00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v23.11" build number 598
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3260
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.132 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.132 The recommended git tool is: git
00:00:00.132 using credential 00000000-0000-0000-0000-000000000002
00:00:00.135 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.156 Fetching changes from the remote Git repository
00:00:00.158 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.177 Using shallow fetch with depth 1
00:00:00.177 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.177 > git --version # timeout=10
00:00:00.192 > git --version # 'git version 2.39.2'
00:00:00.192 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.203 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.203 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.172 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.182 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.193 Checking out Revision 4b79378c7834917407ff4d2cff4edf1dcbb13c5f (FETCH_HEAD)
00:00:06.193 > git config core.sparsecheckout # timeout=10
00:00:06.202 > git read-tree -mu HEAD # timeout=10
00:00:06.218 > git checkout -f 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=5
00:00:06.239 Commit message: "jbp-per-patch: add create-perf-report job as a part of testing"
00:00:06.239 > git rev-list --no-walk 4b79378c7834917407ff4d2cff4edf1dcbb13c5f # timeout=10
00:00:06.319 [Pipeline] Start of Pipeline
00:00:06.332 [Pipeline] library
00:00:06.334 Loading library shm_lib@master
00:00:06.334 Library shm_lib@master is cached. Copying from home.
00:00:06.349 [Pipeline] node
00:00:06.363 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:06.364 [Pipeline] {
00:00:06.373 [Pipeline] catchError
00:00:06.374 [Pipeline] {
00:00:06.385 [Pipeline] wrap
00:00:06.393 [Pipeline] {
00:00:06.399 [Pipeline] stage
00:00:06.400 [Pipeline] { (Prologue)
00:00:06.568 [Pipeline] sh
00:00:06.858 + logger -p user.info -t JENKINS-CI
00:00:06.877 [Pipeline] echo
00:00:06.879 Node: GP11
00:00:06.886 [Pipeline] sh
00:00:07.188 [Pipeline] setCustomBuildProperty
00:00:07.200 [Pipeline] echo
00:00:07.202 Cleanup processes
00:00:07.207 [Pipeline] sh
00:00:07.492 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.493 663298 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.506 [Pipeline] sh
00:00:07.790 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.790 ++ grep -v 'sudo pgrep'
00:00:07.790 ++ awk '{print $1}'
00:00:07.790 + sudo kill -9
00:00:07.790 + true
00:00:07.806 [Pipeline] cleanWs
00:00:07.816 [WS-CLEANUP] Deleting project workspace...
00:00:07.816 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.823 [WS-CLEANUP] done
00:00:07.827 [Pipeline] setCustomBuildProperty
00:00:07.843 [Pipeline] sh
00:00:08.129 + sudo git config --global --replace-all safe.directory '*'
00:00:08.226 [Pipeline] httpRequest
00:00:08.249 [Pipeline] echo
00:00:08.252 Sorcerer 10.211.164.101 is alive
00:00:08.260 [Pipeline] httpRequest
00:00:08.264 HttpMethod: GET
00:00:08.265 URL: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:08.266 Sending request to url: http://10.211.164.101/packages/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:08.284 Response Code: HTTP/1.1 200 OK
00:00:08.285 Success: Status code 200 is in the accepted range: 200,404
00:00:08.286 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:29.853 [Pipeline] sh
00:00:30.142 + tar --no-same-owner -xf jbp_4b79378c7834917407ff4d2cff4edf1dcbb13c5f.tar.gz
00:00:30.156 [Pipeline] httpRequest
00:00:30.191 [Pipeline] echo
00:00:30.192 Sorcerer 10.211.164.101 is alive
00:00:30.200 [Pipeline] httpRequest
00:00:30.204 HttpMethod: GET
00:00:30.205 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:30.206 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:30.222 Response Code: HTTP/1.1 200 OK
00:00:30.223 Success: Status code 200 is in the accepted range: 200,404
00:00:30.223 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:55.628 [Pipeline] sh
00:00:55.913 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:58.459 [Pipeline] sh
00:00:58.740 + git -C spdk log --oneline -n5
00:00:58.740 719d03c6a sock/uring: only register net impl if supported
00:00:58.741 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:00:58.741 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:00:58.741 6c7c1f57e accel: add sequence outstanding stat
00:00:58.741 3bc8e6a26 accel: add utility to put task
00:00:58.759 [Pipeline] withCredentials
00:00:58.772 > git --version # timeout=10
00:00:58.781 > git --version # 'git version 2.39.2'
00:00:58.798 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:00:58.801 [Pipeline] {
00:00:58.810 [Pipeline] retry
00:00:58.811 [Pipeline] {
00:00:58.828 [Pipeline] sh
00:00:59.110 + git ls-remote http://dpdk.org/git/dpdk-stable v23.11
00:01:01.033 [Pipeline] }
00:01:01.055 [Pipeline] // retry
00:01:01.061 [Pipeline] }
00:01:01.082 [Pipeline] // withCredentials
00:01:01.092 [Pipeline] httpRequest
00:01:01.111 [Pipeline] echo
00:01:01.113 Sorcerer 10.211.164.101 is alive
00:01:01.121 [Pipeline] httpRequest
00:01:01.125 HttpMethod: GET
00:01:01.125 URL: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:01.126 Sending request to url: http://10.211.164.101/packages/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:01.129 Response Code: HTTP/1.1 200 OK
00:01:01.129 Success: Status code 200 is in the accepted range: 200,404
00:01:01.129 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:09.694 [Pipeline] sh
00:01:09.981 + tar --no-same-owner -xf dpdk_d15625009dced269fcec27fc81dd74fd58d54cdb.tar.gz
00:01:11.893 [Pipeline] sh
00:01:12.179 + git -C dpdk log --oneline -n5
00:01:12.179 eeb0605f11 version: 23.11.0
00:01:12.179 238778122a doc: update release notes for 23.11
00:01:12.179 46aa6b3cfc doc: fix description of RSS features
00:01:12.179 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:12.179 7e421ae345 devtools: support skipping forbid rule check
00:01:12.191 [Pipeline] }
00:01:12.208 [Pipeline] // stage
00:01:12.217 [Pipeline] stage
00:01:12.219 [Pipeline] { (Prepare)
00:01:12.247 [Pipeline] writeFile
00:01:12.267 [Pipeline] sh
00:01:12.552 + logger -p user.info -t JENKINS-CI
00:01:12.566 [Pipeline] sh
00:01:12.849 + logger -p user.info -t JENKINS-CI
00:01:12.861 [Pipeline] sh
00:01:13.145 + cat autorun-spdk.conf
00:01:13.145 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.145 SPDK_TEST_NVMF=1
00:01:13.145 SPDK_TEST_NVME_CLI=1
00:01:13.145 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.145 SPDK_TEST_NVMF_NICS=e810
00:01:13.146 SPDK_TEST_VFIOUSER=1
00:01:13.146 SPDK_RUN_UBSAN=1
00:01:13.146 NET_TYPE=phy
00:01:13.146 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:13.146 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:13.153 RUN_NIGHTLY=1
00:01:13.161 [Pipeline] readFile
00:01:13.196 [Pipeline] withEnv
00:01:13.198 [Pipeline] {
00:01:13.213 [Pipeline] sh
00:01:13.499 + set -ex
00:01:13.499 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:13.499 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.499 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.499 ++ SPDK_TEST_NVMF=1
00:01:13.499 ++ SPDK_TEST_NVME_CLI=1
00:01:13.499 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.499 ++ SPDK_TEST_NVMF_NICS=e810
00:01:13.499 ++ SPDK_TEST_VFIOUSER=1
00:01:13.499 ++ SPDK_RUN_UBSAN=1
00:01:13.499 ++ NET_TYPE=phy
00:01:13.499 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:13.499 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:13.499 ++ RUN_NIGHTLY=1
00:01:13.499 + case $SPDK_TEST_NVMF_NICS in
00:01:13.499 + DRIVERS=ice
00:01:13.499 + [[ tcp == \r\d\m\a ]]
00:01:13.499 + [[ -n ice ]]
00:01:13.499 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:13.499 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:13.499 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:13.499 rmmod: ERROR: Module irdma is not currently loaded
00:01:13.499 rmmod: ERROR: Module i40iw is not currently loaded
00:01:13.499 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:13.499 + true
00:01:13.499 + for D in $DRIVERS
00:01:13.499 + sudo modprobe ice
00:01:13.499 + exit 0
00:01:13.515 [Pipeline] }
00:01:13.534 [Pipeline] // withEnv
00:01:13.539 [Pipeline] }
00:01:13.557 [Pipeline] // stage
00:01:13.568 [Pipeline] catchError
00:01:13.569 [Pipeline] {
00:01:13.585 [Pipeline] timeout
00:01:13.585 Timeout set to expire in 50 min
00:01:13.587 [Pipeline] {
00:01:13.602 [Pipeline] stage
00:01:13.604 [Pipeline] { (Tests)
00:01:13.621 [Pipeline] sh
00:01:13.912 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.912 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.912 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.912 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:13.912 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:13.912 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:13.912 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:13.912 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:13.912 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:13.912 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:13.912 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:13.912 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:13.912 + source /etc/os-release
00:01:13.912 ++ NAME='Fedora Linux'
00:01:13.912 ++ VERSION='38 (Cloud Edition)'
00:01:13.912 ++ ID=fedora
00:01:13.912 ++ VERSION_ID=38
00:01:13.912 ++ VERSION_CODENAME=
00:01:13.912 ++ PLATFORM_ID=platform:f38
00:01:13.912 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:13.912 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:13.912 ++ LOGO=fedora-logo-icon
00:01:13.912 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:13.912 ++ HOME_URL=https://fedoraproject.org/
00:01:13.912 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:13.912 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:13.912 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:13.912 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:13.912 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:13.912 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:13.912 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:13.912 ++ SUPPORT_END=2024-05-14
00:01:13.912 ++ VARIANT='Cloud Edition'
00:01:13.912 ++ VARIANT_ID=cloud
00:01:13.912 + uname -a
00:01:13.912 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:13.912 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:14.850 Hugepages
00:01:14.851 node hugesize free / total
00:01:14.851 node0 1048576kB 0 / 0
00:01:14.851 node0 2048kB 0 / 0
00:01:14.851 node1 1048576kB 0 / 0
00:01:14.851 node1 2048kB 0 / 0
00:01:14.851
00:01:14.851 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:14.851 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:14.851 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:14.851 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:14.851 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:14.851 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:14.851 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:14.851 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:15.109 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:15.109 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:15.109 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:15.109 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:15.109 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:15.109 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:15.109 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:15.109 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:15.109 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:15.109 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:15.109 + rm -f /tmp/spdk-ld-path
00:01:15.109 + source autorun-spdk.conf
00:01:15.109 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.109 ++ SPDK_TEST_NVMF=1
00:01:15.109 ++ SPDK_TEST_NVME_CLI=1
00:01:15.109 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.109 ++ SPDK_TEST_NVMF_NICS=e810
00:01:15.109 ++ SPDK_TEST_VFIOUSER=1
00:01:15.109 ++ SPDK_RUN_UBSAN=1
00:01:15.109 ++ NET_TYPE=phy
00:01:15.109 ++ SPDK_TEST_NATIVE_DPDK=v23.11
00:01:15.109 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:15.109 ++ RUN_NIGHTLY=1
00:01:15.109 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:15.109 + [[ -n '' ]]
00:01:15.109 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:15.109 + for M in /var/spdk/build-*-manifest.txt
00:01:15.109 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:15.109 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.109 + for M in /var/spdk/build-*-manifest.txt
00:01:15.109 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:15.109 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:15.109 ++ uname
00:01:15.109 + [[ Linux == \L\i\n\u\x ]]
00:01:15.109 + sudo dmesg -T
00:01:15.109 + sudo dmesg --clear
00:01:15.109 + dmesg_pid=664009
00:01:15.109 + [[ Fedora Linux == FreeBSD ]]
00:01:15.109 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.109 + sudo dmesg -Tw
00:01:15.109 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:15.109 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:15.109 + [[ -x /usr/src/fio-static/fio ]]
00:01:15.109 + export FIO_BIN=/usr/src/fio-static/fio
00:01:15.109 + FIO_BIN=/usr/src/fio-static/fio
00:01:15.109 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:15.109 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:15.109 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:15.109 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.109 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:15.109 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:15.109 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.109 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:15.109 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:15.109 Test configuration:
00:01:15.109 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:15.109 SPDK_TEST_NVMF=1
00:01:15.109 SPDK_TEST_NVME_CLI=1
00:01:15.109 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:15.109 SPDK_TEST_NVMF_NICS=e810
00:01:15.109 SPDK_TEST_VFIOUSER=1
00:01:15.109 SPDK_RUN_UBSAN=1
00:01:15.109 NET_TYPE=phy
00:01:15.109 SPDK_TEST_NATIVE_DPDK=v23.11
00:01:15.109 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:15.109 RUN_NIGHTLY=1
21:07:49 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
21:07:49 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
21:07:49 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
21:07:49 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
21:07:49 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:07:49 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:07:49 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:07:49 -- paths/export.sh@5 -- $ export PATH
21:07:49 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:07:49 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
21:07:49 -- common/autobuild_common.sh@444 -- $ date +%s
21:07:49 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720724869.XXXXXX
21:07:49 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720724869.oAfryc
21:07:49 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
21:07:49 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']'
21:07:49 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
21:07:49 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk'
21:07:49 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
21:07:49 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
21:07:49 -- common/autobuild_common.sh@460 -- $ get_config_params
21:07:49 -- common/autotest_common.sh@396 -- $ xtrace_disable
21:07:49 -- common/autotest_common.sh@10 -- $ set +x
21:07:49 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build'
21:07:49 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
21:07:49 -- pm/common@17 -- $ local monitor
21:07:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:07:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:07:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:07:49 -- pm/common@21 -- $ date +%s
21:07:49 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
21:07:49 -- pm/common@21 -- $ date +%s
21:07:49 -- pm/common@25 -- $ sleep 1
21:07:49 -- pm/common@21 -- $ date +%s
21:07:49 -- pm/common@21 -- $ date +%s
21:07:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720724869
21:07:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720724869
21:07:49 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720724869
21:07:49 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1720724869
00:01:15.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720724869_collect-vmstat.pm.log
00:01:15.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720724869_collect-cpu-load.pm.log
00:01:15.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720724869_collect-cpu-temp.pm.log
00:01:15.369 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1720724869_collect-bmc-pm.bmc.pm.log
00:01:16.307 21:07:50 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
21:07:50 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
21:07:50 -- spdk/autobuild.sh@12 -- $ umask 022
21:07:50 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
21:07:50 -- spdk/autobuild.sh@16 -- $ date -u
00:01:16.307 Thu Jul 11 07:07:50 PM UTC 2024
21:07:50 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:16.307 v24.09-pre-202-g719d03c6a
21:07:50 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
21:07:50 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
21:07:50 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
21:07:50 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']'
21:07:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable
21:07:50 -- common/autotest_common.sh@10 -- $ set +x
00:01:16.307 ************************************
00:01:16.307 START TEST ubsan
00:01:16.307 ************************************
21:07:50 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan'
00:01:16.308 using ubsan
00:01:16.308
00:01:16.308 real 0m0.000s
00:01:16.308 user 0m0.000s
00:01:16.308 sys 0m0.000s
21:07:50 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable
21:07:50 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:16.308 ************************************
00:01:16.308 END TEST ubsan
00:01:16.308 ************************************
21:07:50 -- common/autotest_common.sh@1142 -- $ return 0
21:07:50 -- spdk/autobuild.sh@27 -- $ '[' -n v23.11 ']'
21:07:50 -- spdk/autobuild.sh@28 -- $ build_native_dpdk
21:07:50 -- common/autobuild_common.sh@436 -- $ run_test build_native_dpdk _build_native_dpdk
21:07:50 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
21:07:50 -- common/autotest_common.sh@1105 -- $ xtrace_disable
xtrace_disable 00:01:16.308 21:07:50 -- common/autotest_common.sh@10 -- $ set +x 00:01:16.308 ************************************ 00:01:16.308 START TEST build_native_dpdk 00:01:16.308 ************************************ 00:01:16.308 21:07:50 build_native_dpdk -- common/autotest_common.sh@1123 -- $ _build_native_dpdk 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:01:16.308 21:07:50 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! 
21:07:50 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
21:07:50 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5
00:01:16.308 eeb0605f11 version: 23.11.0
00:01:16.308 238778122a doc: update release notes for 23.11
00:01:16.308 46aa6b3cfc doc: fix description of RSS features
00:01:16.308 dd88f51a57 devtools: forbid DPDK API in cnxk base driver
00:01:16.308 7e421ae345 devtools: support skipping forbid rule check
21:07:50 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon'
21:07:50 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags=
21:07:50 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=23.11.0
21:07:50 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]]
21:07:50 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]]
21:07:50 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror'
21:07:50 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]]
21:07:50 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]]
21:07:50 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow'
21:07:50 build_native_dpdk -- common/autobuild_common.sh@100 -- $ DPDK_DRIVERS=("bus" "bus/pci" "bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base")
21:07:50 build_native_dpdk -- common/autobuild_common.sh@102 -- $ local mlx5_libs_added=n
21:07:50 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
21:07:50 build_native_dpdk -- common/autobuild_common.sh@103 -- $ [[ 0 -eq 1 ]]
21:07:50 build_native_dpdk -- common/autobuild_common.sh@139 -- $ [[ 0 -eq 1 ]]
21:07:50 build_native_dpdk -- common/autobuild_common.sh@167 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
21:07:50 build_native_dpdk -- common/autobuild_common.sh@168 -- $ uname -s
21:07:50 build_native_dpdk -- common/autobuild_common.sh@168 -- $ '[' Linux = Linux ']'
21:07:50 build_native_dpdk -- common/autobuild_common.sh@169 -- $ lt 23.11.0 21.11.0
21:07:50 build_native_dpdk -- scripts/common.sh@370 -- $ cmp_versions 23.11.0 '<' 21.11.0
21:07:50 build_native_dpdk -- scripts/common.sh@330 -- $ local ver1 ver1_l
21:07:50 build_native_dpdk -- scripts/common.sh@331 -- $ local ver2 ver2_l
21:07:50 build_native_dpdk -- scripts/common.sh@333 -- $ IFS=.-:
21:07:50 build_native_dpdk -- scripts/common.sh@333 -- $ read -ra ver1
21:07:50 build_native_dpdk -- scripts/common.sh@334 -- $ IFS=.-:
21:07:50 build_native_dpdk -- scripts/common.sh@334 -- $ read -ra ver2
21:07:50 build_native_dpdk -- scripts/common.sh@335 -- $ local 'op=<'
21:07:50 build_native_dpdk -- scripts/common.sh@337 -- $ ver1_l=3
21:07:50 build_native_dpdk -- scripts/common.sh@338 -- $ ver2_l=3
21:07:50 build_native_dpdk -- scripts/common.sh@340 -- $ local lt=0 gt=0 eq=0 v
21:07:50 build_native_dpdk -- scripts/common.sh@341 -- $ case "$op" in
21:07:50 build_native_dpdk -- scripts/common.sh@342 -- $ : 1
21:07:50 build_native_dpdk -- scripts/common.sh@361 -- $ (( v = 0 ))
21:07:50 build_native_dpdk -- scripts/common.sh@361 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
21:07:50 build_native_dpdk -- scripts/common.sh@362 -- $ decimal 23
21:07:50 build_native_dpdk -- scripts/common.sh@350 -- $ local d=23
21:07:50 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 23 =~ ^[0-9]+$ ]]
21:07:50 build_native_dpdk -- scripts/common.sh@352 -- $ echo 23
21:07:50 build_native_dpdk -- scripts/common.sh@362 -- $ ver1[v]=23
21:07:50 build_native_dpdk -- scripts/common.sh@363 -- $ decimal 21
21:07:50 build_native_dpdk -- scripts/common.sh@350 -- $ local d=21
21:07:50 build_native_dpdk -- scripts/common.sh@351 -- $ [[ 21 =~ ^[0-9]+$ ]]
21:07:50 build_native_dpdk -- scripts/common.sh@352 -- $ echo 21
21:07:50 build_native_dpdk -- scripts/common.sh@363 -- $ ver2[v]=21
21:07:50 build_native_dpdk -- scripts/common.sh@364 -- $ (( ver1[v] > ver2[v] ))
21:07:50 build_native_dpdk -- scripts/common.sh@364 -- $ return 1
21:07:50 build_native_dpdk -- common/autobuild_common.sh@173 -- $ patch -p1
00:01:16.308 patching file config/rte_config.h
00:01:16.308 Hunk #1 succeeded at 60 (offset 1 line).
21:07:50 build_native_dpdk -- common/autobuild_common.sh@177 -- $ dpdk_kmods=false
21:07:50 build_native_dpdk -- common/autobuild_common.sh@178 -- $ uname -s
21:07:50 build_native_dpdk -- common/autobuild_common.sh@178 -- $ '[' Linux = FreeBSD ']'
21:07:50 build_native_dpdk -- common/autobuild_common.sh@182 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base
21:07:50 build_native_dpdk -- common/autobuild_common.sh@182 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:20.494 The Meson build system
00:01:20.494 Version: 1.3.1
00:01:20.494 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk
00:01:20.494 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp
00:01:20.494 Build type: native build
00:01:20.494 Program cat found: YES (/usr/bin/cat)
00:01:20.494 Project name: DPDK
00:01:20.494 Project version: 23.11.0
00:01:20.494 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:20.494 C linker for the host machine: gcc ld.bfd 2.39-16
00:01:20.494 Host machine cpu family: x86_64
00:01:20.494 Host machine cpu: x86_64
00:01:20.494 Message: ## Building in Developer Mode ##
00:01:20.494 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:20.494 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh)
00:01:20.494 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh)
00:01:20.494 Program python3 found: YES (/usr/bin/python3)
00:01:20.494 Program cat found: YES (/usr/bin/cat)
00:01:20.494 config/meson.build:113: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead.
00:01:20.494 Compiler for C supports arguments -march=native: YES
00:01:20.494 Checking for size of "void *" : 8
00:01:20.494 Checking for size of "void *" : 8 (cached)
00:01:20.494 Library m found: YES
00:01:20.494 Library numa found: YES
00:01:20.494 Has header "numaif.h" : YES
00:01:20.494 Library fdt found: NO
00:01:20.494 Library execinfo found: NO
00:01:20.494 Has header "execinfo.h" : YES
00:01:20.494 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:20.494 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:20.494 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:20.494 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:20.494 Run-time dependency openssl found: YES 3.0.9
00:01:20.494 Run-time dependency libpcap found: YES 1.10.4
00:01:20.494 Has header "pcap.h" with dependency libpcap: YES
00:01:20.494 Compiler for C supports arguments -Wcast-qual: YES
00:01:20.494 Compiler for C supports arguments -Wdeprecated: YES
00:01:20.494 Compiler for C supports arguments -Wformat: YES
00:01:20.494 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:20.494 Compiler for C supports arguments -Wformat-security: NO
00:01:20.494 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:20.494 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:20.494 Compiler for C supports arguments -Wnested-externs: YES
00:01:20.494 Compiler for C supports arguments -Wold-style-definition: YES
00:01:20.494 Compiler for C supports arguments -Wpointer-arith: YES
00:01:20.494 Compiler for C supports arguments -Wsign-compare: YES
00:01:20.494 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:20.494 Compiler for C supports arguments -Wundef: YES
00:01:20.494 Compiler for C supports arguments -Wwrite-strings: YES
00:01:20.494 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:20.494 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:20.494 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:20.494 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:20.494 Program objdump found: YES (/usr/bin/objdump)
00:01:20.494 Compiler for C supports arguments -mavx512f: YES
00:01:20.494 Checking if "AVX512 checking" compiles: YES
00:01:20.494 Fetching value of define "__SSE4_2__" : 1
00:01:20.494 Fetching value of define "__AES__" : 1
00:01:20.494 Fetching value of define "__AVX__" : 1
00:01:20.494 Fetching value of define "__AVX2__" : (undefined)
00:01:20.494 Fetching value of define "__AVX512BW__" : (undefined)
00:01:20.494 Fetching value of define "__AVX512CD__" : (undefined)
00:01:20.494 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:20.494 Fetching value of define "__AVX512F__" : (undefined)
00:01:20.494 Fetching value of define "__AVX512VL__" : (undefined)
00:01:20.494 Fetching value of define "__PCLMUL__" : 1
00:01:20.494 Fetching value of define "__RDRND__" : 1
00:01:20.494 Fetching value of define "__RDSEED__" : (undefined)
00:01:20.494 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:20.494 Fetching value of define "__znver1__" : (undefined)
00:01:20.494 Fetching value of define "__znver2__" : (undefined)
00:01:20.494 Fetching value of define "__znver3__" : (undefined)
00:01:20.494 Fetching value of define "__znver4__" : (undefined)
00:01:20.494 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:20.494 Message: lib/log: Defining dependency "log"
00:01:20.494 Message: lib/kvargs: Defining dependency "kvargs"
"kvargs" 00:01:20.494 Message: lib/telemetry: Defining dependency "telemetry" 00:01:20.494 Checking for function "getentropy" : NO 00:01:20.494 Message: lib/eal: Defining dependency "eal" 00:01:20.494 Message: lib/ring: Defining dependency "ring" 00:01:20.494 Message: lib/rcu: Defining dependency "rcu" 00:01:20.494 Message: lib/mempool: Defining dependency "mempool" 00:01:20.494 Message: lib/mbuf: Defining dependency "mbuf" 00:01:20.494 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:20.494 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:20.494 Compiler for C supports arguments -mpclmul: YES 00:01:20.494 Compiler for C supports arguments -maes: YES 00:01:20.494 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:20.494 Compiler for C supports arguments -mavx512bw: YES 00:01:20.494 Compiler for C supports arguments -mavx512dq: YES 00:01:20.494 Compiler for C supports arguments -mavx512vl: YES 00:01:20.494 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:20.494 Compiler for C supports arguments -mavx2: YES 00:01:20.494 Compiler for C supports arguments -mavx: YES 00:01:20.494 Message: lib/net: Defining dependency "net" 00:01:20.494 Message: lib/meter: Defining dependency "meter" 00:01:20.494 Message: lib/ethdev: Defining dependency "ethdev" 00:01:20.494 Message: lib/pci: Defining dependency "pci" 00:01:20.494 Message: lib/cmdline: Defining dependency "cmdline" 00:01:20.494 Message: lib/metrics: Defining dependency "metrics" 00:01:20.494 Message: lib/hash: Defining dependency "hash" 00:01:20.494 Message: lib/timer: Defining dependency "timer" 00:01:20.494 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:20.494 Fetching value of define "__AVX512VL__" : (undefined) (cached) 00:01:20.494 Fetching value of define "__AVX512CD__" : (undefined) (cached) 00:01:20.494 Fetching value of define "__AVX512BW__" : (undefined) (cached) 00:01:20.494 Compiler for C supports arguments -mavx512f -mavx512vl -mavx512cd -mavx512bw: YES 00:01:20.494 Message: lib/acl: Defining dependency "acl" 00:01:20.494 Message: lib/bbdev: Defining dependency "bbdev" 00:01:20.494 Message: lib/bitratestats: Defining dependency "bitratestats" 00:01:20.494 Run-time dependency libelf found: YES 0.190 00:01:20.494 Message: lib/bpf: Defining dependency "bpf" 00:01:20.494 Message: lib/cfgfile: Defining dependency "cfgfile" 00:01:20.495 Message: lib/compressdev: Defining dependency "compressdev" 00:01:20.495 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:20.495 Message: lib/distributor: Defining dependency "distributor" 00:01:20.495 Message: lib/dmadev: Defining dependency "dmadev" 00:01:20.495 Message: lib/efd: Defining dependency "efd" 00:01:20.495 Message: lib/eventdev: Defining dependency "eventdev" 00:01:20.495 Message: lib/dispatcher: Defining dependency "dispatcher" 00:01:20.495 Message: lib/gpudev: Defining dependency "gpudev" 00:01:20.495 Message: lib/gro: Defining dependency "gro" 00:01:20.495 Message: lib/gso: Defining dependency "gso" 00:01:20.495 Message: lib/ip_frag: Defining dependency "ip_frag" 00:01:20.495 Message: lib/jobstats: Defining dependency "jobstats" 00:01:20.495 Message: lib/latencystats: Defining dependency "latencystats" 00:01:20.495 Message: lib/lpm: Defining dependency "lpm" 00:01:20.495 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:01:20.495 Fetching value of define "__AVX512DQ__" : (undefined) (cached) 00:01:20.495 Fetching value of define "__AVX512IFMA__" : (undefined) 00:01:20.495 Compiler for C 
00:01:20.495 Message: lib/member: Defining dependency "member"
00:01:20.495 Message: lib/pcapng: Defining dependency "pcapng"
00:01:20.495 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:20.495 Message: lib/power: Defining dependency "power"
00:01:20.495 Message: lib/rawdev: Defining dependency "rawdev"
00:01:20.495 Message: lib/regexdev: Defining dependency "regexdev"
00:01:20.495 Message: lib/mldev: Defining dependency "mldev"
00:01:20.495 Message: lib/rib: Defining dependency "rib"
00:01:20.495 Message: lib/reorder: Defining dependency "reorder"
00:01:20.495 Message: lib/sched: Defining dependency "sched"
00:01:20.495 Message: lib/security: Defining dependency "security"
00:01:20.495 Message: lib/stack: Defining dependency "stack"
00:01:20.495 Has header "linux/userfaultfd.h" : YES
00:01:20.495 Has header "linux/vduse.h" : YES
00:01:20.495 Message: lib/vhost: Defining dependency "vhost"
00:01:20.495 Message: lib/ipsec: Defining dependency "ipsec"
00:01:20.495 Message: lib/pdcp: Defining dependency "pdcp"
00:01:20.495 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:20.495 Fetching value of define "__AVX512DQ__" : (undefined) (cached)
00:01:20.495 Compiler for C supports arguments -mavx512f -mavx512dq: YES
00:01:20.495 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:20.495 Message: lib/fib: Defining dependency "fib"
00:01:20.495 Message: lib/port: Defining dependency "port"
00:01:20.495 Message: lib/pdump: Defining dependency "pdump"
00:01:20.495 Message: lib/table: Defining dependency "table"
00:01:20.495 Message: lib/pipeline: Defining dependency "pipeline"
00:01:20.495 Message: lib/graph: Defining dependency "graph"
00:01:20.495 Message: lib/node: Defining dependency "node"
00:01:21.874 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:21.874 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:21.874 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:21.874 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:21.874 Compiler for C supports arguments -Wno-sign-compare: YES
00:01:21.874 Compiler for C supports arguments -Wno-unused-value: YES
00:01:21.874 Compiler for C supports arguments -Wno-format: YES
00:01:21.874 Compiler for C supports arguments -Wno-format-security: YES
00:01:21.874 Compiler for C supports arguments -Wno-format-nonliteral: YES
00:01:21.874 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:01:21.874 Compiler for C supports arguments -Wno-unused-but-set-variable: YES
00:01:21.874 Compiler for C supports arguments -Wno-unused-parameter: YES
00:01:21.874 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:21.874 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:21.874 Compiler for C supports arguments -mavx512bw: YES (cached)
00:01:21.874 Compiler for C supports arguments -march=skylake-avx512: YES
00:01:21.874 Message: drivers/net/i40e: Defining dependency "net_i40e"
00:01:21.874 Has header "sys/epoll.h" : YES
00:01:21.874 Program doxygen found: YES (/usr/bin/doxygen)
00:01:21.874 Configuring doxy-api-html.conf using configuration
00:01:21.874 Configuring doxy-api-man.conf using configuration
00:01:21.874 Program mandb found: YES (/usr/bin/mandb)
00:01:21.874 Program sphinx-build found: NO
00:01:21.874 Configuring rte_build_config.h using configuration
00:01:21.874 Message:
00:01:21.874 =================
00:01:21.874 Applications Enabled
00:01:21.874 =================
00:01:21.874
00:01:21.874 apps:
00:01:21.874 dumpcap, graph, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf,
00:01:21.875 test-crypto-perf, test-dma-perf, test-eventdev, test-fib, test-flow-perf, test-gpudev, test-mldev, test-pipeline,
00:01:21.875 test-pmd, test-regex, test-sad, test-security-perf,
00:01:21.875
00:01:21.875 Message:
00:01:21.875 =================
00:01:21.875 Libraries Enabled
00:01:21.875 =================
00:01:21.875
00:01:21.875 libs:
00:01:21.875 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:21.875 net, meter, ethdev, pci, cmdline, metrics, hash, timer,
00:01:21.875 acl, bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor,
00:01:21.875 dmadev, efd, eventdev, dispatcher, gpudev, gro, gso, ip_frag,
00:01:21.875 jobstats, latencystats, lpm, member, pcapng, power, rawdev, regexdev,
00:01:21.875 mldev, rib, reorder, sched, security, stack, vhost, ipsec,
00:01:21.875 pdcp, fib, port, pdump, table, pipeline, graph, node,
00:01:21.875
00:01:21.875
00:01:21.875 Message:
00:01:21.875 ===============
00:01:21.875 Drivers Enabled
00:01:21.875 ===============
00:01:21.875
00:01:21.875 common:
00:01:21.875
00:01:21.875 bus:
00:01:21.875 pci, vdev,
00:01:21.875 mempool:
00:01:21.875 ring,
00:01:21.875 dma:
00:01:21.875
00:01:21.875 net:
00:01:21.875 i40e,
00:01:21.875 raw:
00:01:21.875
00:01:21.875 crypto:
00:01:21.875
00:01:21.875 compress:
00:01:21.875
00:01:21.875 regex:
00:01:21.875
00:01:21.875 ml:
00:01:21.875
00:01:21.875 vdpa:
00:01:21.875
00:01:21.875 event:
00:01:21.875
00:01:21.875 baseband:
00:01:21.875
00:01:21.875 gpu:
00:01:21.875
00:01:21.875
00:01:21.875 Message:
00:01:21.875 =================
00:01:21.875 Content Skipped
00:01:21.875 =================
00:01:21.875
00:01:21.875 apps:
00:01:21.875
00:01:21.875 libs:
00:01:21.875
00:01:21.875 drivers:
00:01:21.875 common/cpt: not in enabled drivers build config
00:01:21.875 common/dpaax: not in enabled drivers build config
00:01:21.875 common/iavf: not in enabled drivers build config
00:01:21.875 common/idpf: not in enabled drivers build config
00:01:21.875 common/mvep: not in enabled drivers build config
00:01:21.875 common/octeontx: not in enabled drivers build config
00:01:21.875 bus/auxiliary: not in enabled drivers build config
00:01:21.875 bus/cdx: not in enabled drivers build config
00:01:21.875 bus/dpaa: not in enabled drivers build config
00:01:21.875 bus/fslmc: not in enabled drivers build config
00:01:21.875 bus/ifpga: not in enabled drivers build config
00:01:21.875 bus/platform: not in enabled drivers build config
00:01:21.875 bus/vmbus: not in enabled drivers build config
00:01:21.875 common/cnxk: not in enabled drivers build config
00:01:21.875 common/mlx5: not in enabled drivers build config
00:01:21.875 common/nfp: not in enabled drivers build config
00:01:21.875 common/qat: not in enabled drivers build config
00:01:21.875 common/sfc_efx: not in enabled drivers build config
00:01:21.875 mempool/bucket: not in enabled drivers build config
00:01:21.875 mempool/cnxk: not in enabled drivers build config
00:01:21.875 mempool/dpaa: not in enabled drivers build config
00:01:21.875 mempool/dpaa2: not in enabled drivers build config
00:01:21.875 mempool/octeontx: not in enabled drivers build config
00:01:21.875 mempool/stack: not in enabled drivers build config
00:01:21.875 dma/cnxk: not in enabled drivers build config
00:01:21.875 dma/dpaa: not in enabled drivers build config
00:01:21.875 dma/dpaa2: not in enabled drivers build config
00:01:21.875 dma/hisilicon: not in enabled drivers build config
00:01:21.875 dma/idxd: not in enabled drivers build config
00:01:21.875 dma/ioat: not in enabled drivers build config
00:01:21.875 dma/skeleton: not in enabled drivers build config
00:01:21.875 net/af_packet: not in enabled drivers build config
00:01:21.875 net/af_xdp: not in enabled drivers build config
00:01:21.875 net/ark: not in enabled drivers build config
00:01:21.875 net/atlantic: not in enabled drivers build config
00:01:21.875 net/avp: not in enabled drivers build config
00:01:21.875 net/axgbe: not in enabled drivers build config
00:01:21.875 net/bnx2x: not in enabled drivers build config
00:01:21.875 net/bnxt: not in enabled drivers build config
00:01:21.875 net/bonding: not in enabled drivers build config
00:01:21.875 net/cnxk: not in enabled drivers build config
00:01:21.875 net/cpfl: not in enabled drivers build config
00:01:21.875 net/cxgbe: not in enabled drivers build config
00:01:21.875 net/dpaa: not in enabled drivers build config
00:01:21.875 net/dpaa2: not in enabled drivers build config
00:01:21.875 net/e1000: not in enabled drivers build config
00:01:21.875 net/ena: not in enabled drivers build config
00:01:21.875 net/enetc: not in enabled drivers build config
00:01:21.875 net/enetfec: not in enabled drivers build config
00:01:21.875 net/enic: not in enabled drivers build config
00:01:21.875 net/failsafe: not in enabled drivers build config
00:01:21.875 net/fm10k: not in enabled drivers build config
00:01:21.875 net/gve: not in enabled drivers build config
00:01:21.875 net/hinic: not in enabled drivers build config
00:01:21.875 net/hns3: not in enabled drivers build config
00:01:21.875 net/iavf: not in enabled drivers build config
00:01:21.875 net/ice: not in enabled drivers build config
00:01:21.875 net/idpf: not in enabled drivers build config
00:01:21.875 net/igc: not in enabled drivers build config
00:01:21.875 net/ionic: not in enabled drivers build config
00:01:21.875 net/ipn3ke: not in enabled drivers build config
00:01:21.875 net/ixgbe: not in enabled drivers build config
00:01:21.875 net/mana: not in enabled drivers build config
00:01:21.875 net/memif: not in enabled drivers build config
00:01:21.875 net/mlx4: not in enabled drivers build config
00:01:21.875 net/mlx5: not in enabled drivers build config
00:01:21.875 net/mvneta: not in enabled drivers build config
00:01:21.875 net/mvpp2: not in enabled drivers build config
00:01:21.875 net/netvsc: not in enabled drivers build config
00:01:21.875 net/nfb: not in enabled drivers build config
00:01:21.875 net/nfp: not in enabled drivers build config
00:01:21.875 net/ngbe: not in enabled drivers build config
00:01:21.875 net/null: not in enabled drivers build config
00:01:21.875 net/octeontx: not in enabled drivers build config
00:01:21.875 net/octeon_ep: not in enabled drivers build config
00:01:21.875 net/pcap: not in enabled drivers build config
00:01:21.875 net/pfe: not in enabled drivers build config
00:01:21.875 net/qede: not in enabled drivers build config
00:01:21.875 net/ring: not in enabled drivers build config
00:01:21.875 net/sfc: not in enabled drivers build config
00:01:21.875 net/softnic: not in enabled drivers build config
00:01:21.875 net/tap: not in enabled drivers build config
00:01:21.875 net/thunderx: not in enabled drivers build config
00:01:21.875 net/txgbe: not in enabled drivers build config
00:01:21.875 net/vdev_netvsc: not in enabled drivers build config
00:01:21.875 net/vhost: not in enabled drivers build config
00:01:21.875 net/virtio: not in enabled drivers build config
00:01:21.875 net/vmxnet3: not in enabled drivers build config
00:01:21.875 raw/cnxk_bphy: not in enabled drivers build config
00:01:21.875 raw/cnxk_gpio: not in enabled drivers build config
00:01:21.875 raw/dpaa2_cmdif: not in enabled drivers build config
00:01:21.875 raw/ifpga: not in enabled drivers build config
00:01:21.875 raw/ntb: not in enabled drivers build config
00:01:21.875 raw/skeleton: not in enabled drivers build config
00:01:21.875 crypto/armv8: not in enabled drivers build config
00:01:21.875 crypto/bcmfs: not in enabled drivers build config
00:01:21.875 crypto/caam_jr: not in enabled drivers build config
00:01:21.875 crypto/ccp: not in enabled drivers build config
00:01:21.875 crypto/cnxk: not in enabled drivers build config
00:01:21.875 crypto/dpaa_sec: not in enabled drivers build config
00:01:21.875 crypto/dpaa2_sec: not in enabled drivers build config
00:01:21.875 crypto/ipsec_mb: not in enabled drivers build config
00:01:21.875 crypto/mlx5: not in enabled drivers build config
00:01:21.875 crypto/mvsam: not in enabled drivers build config
00:01:21.875 crypto/nitrox: not in enabled drivers build config
00:01:21.875 crypto/null: not in enabled drivers build config
00:01:21.875 crypto/octeontx: not in enabled drivers build config
00:01:21.875 crypto/openssl: not in enabled drivers build config
00:01:21.875 crypto/scheduler: not in enabled drivers build config
00:01:21.875 crypto/uadk: not in enabled drivers build config
00:01:21.875 crypto/virtio: not in enabled drivers build config
00:01:21.875 compress/isal: not in enabled drivers build config
00:01:21.875 compress/mlx5: not in enabled drivers build config
00:01:21.875 compress/octeontx: not in enabled drivers build config
00:01:21.875 compress/zlib: not in enabled drivers build config
00:01:21.875 regex/mlx5: not in enabled drivers build config
00:01:21.875 regex/cn9k: not in enabled drivers build config
00:01:21.875 ml/cnxk: not in enabled drivers build config
00:01:21.875 vdpa/ifc: not in enabled drivers build config
00:01:21.875 vdpa/mlx5: not in enabled drivers build config
00:01:21.875 vdpa/nfp: not in enabled drivers build config
00:01:21.875 vdpa/sfc: not in enabled drivers build config
00:01:21.875 event/cnxk: not in enabled drivers build config
00:01:21.875 event/dlb2: not in enabled drivers build config
00:01:21.875 event/dpaa: not in enabled drivers build config
00:01:21.875 event/dpaa2: not in enabled drivers build config
00:01:21.875 event/dsw: not in enabled drivers build config
00:01:21.875 event/opdl: not in enabled drivers build config
00:01:21.875 event/skeleton: not in enabled drivers build config
00:01:21.875 event/sw: not in enabled drivers build config
00:01:21.875 event/octeontx: not in enabled drivers build config
00:01:21.875 baseband/acc: not in enabled drivers build config
00:01:21.875 baseband/fpga_5gnr_fec: not in enabled drivers build config
00:01:21.875 baseband/fpga_lte_fec: not in enabled drivers build config
00:01:21.875 baseband/la12xx: not in enabled drivers build config
00:01:21.875 baseband/null: not in enabled drivers build config
00:01:21.875 baseband/turbo_sw: not in enabled drivers build config
00:01:21.875 gpu/cuda: not in enabled drivers build config
00:01:21.875
00:01:21.875
00:01:21.875 Build targets in project: 220
00:01:21.875
00:01:21.875 DPDK 23.11.0
00:01:21.875
00:01:21.875 User defined options
00:01:21.875 libdir : lib
00:01:21.875 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:01:21.875 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow
00:01:21.875 c_link_args :
00:01:21.875 enable_docs : false
00:01:21.875 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,
00:01:21.875 enable_kmods : false
00:01:21.875 machine : native
00:01:21.875 tests : false
00:01:21.875
00:01:21.875 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:21.876 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated.
21:07:56 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48
00:01:21.876 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:01:21.876 [1/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:01:21.876 [2/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:01:21.876 [3/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:01:21.876 [4/710] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:01:21.876 [5/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:21.876 [6/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:21.876 [7/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:21.876 [8/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:21.876 [9/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:21.876 [10/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:21.876 [11/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:21.876 [12/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:22.134 [13/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:22.134 [14/710] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:22.134 [15/710] Linking static target lib/librte_kvargs.a
00:01:22.134 [16/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:22.134 [17/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:22.134 [18/710] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:22.134 [19/710] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:22.134 [20/710] Linking static target lib/librte_log.a
00:01:22.394 [21/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:22.394 [22/710] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.970 [23/710] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:22.970 [24/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:22.970 [25/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:22.970 [26/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:22.970 [27/710] Linking target lib/librte_log.so.24.0
00:01:22.970 [28/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:22.970 [29/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:22.970 [30/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:22.970 [31/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:22.970 [32/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:22.970 [33/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:22.970 [34/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:22.970 [35/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:22.970 [36/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:22.970 [37/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:22.970 [38/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:22.970 [39/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:22.970 [40/710] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:22.970 [41/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:22.970 [42/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:22.970 [43/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:22.970 [44/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:23.234 [45/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:23.234 [46/710] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:23.234 [47/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:23.234 [48/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:23.234 [49/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:23.234 [50/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:23.234 [51/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:23.234 [52/710] Generating symbol file lib/librte_log.so.24.0.p/librte_log.so.24.0.symbols 00:01:23.234 [53/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:23.234 [54/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:23.234 [55/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:23.234 [56/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:23.234 [57/710] Linking target lib/librte_kvargs.so.24.0 00:01:23.234 [58/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:23.234 [59/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:23.234 [60/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:23.234 [61/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:23.234 [62/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:23.492 [63/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:23.492 [64/710] Generating symbol file lib/librte_kvargs.so.24.0.p/librte_kvargs.so.24.0.symbols 00:01:23.492 [65/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:23.492 [66/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:23.752 [67/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:23.752 [68/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:23.752 [69/710] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:23.752 [70/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:23.752 [71/710] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:23.752 
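The "User defined options" summary and the meson deprecation warning printed above imply the build was configured with the legacy `meson [options]` form. A minimal sketch of the non-deprecated `meson setup` equivalent, assuming the option values exactly as printed in this log (the enable_drivers list is reproduced as shown there and may be truncated in the log):

    # Sketch only: option values taken from the "User defined options" block above.
    meson setup dpdk/build-tmp dpdk \
      --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
      --libdir=lib \
      -Dc_args='-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
      -Denable_docs=false \
      -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base \
      -Denable_kmods=false \
      -Dmachine=native \
      -Dtests=false
    ninja -C dpdk/build-tmp -j48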
[72/710] Linking static target lib/librte_pci.a 00:01:23.752 [73/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:23.752 [74/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:23.752 [75/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:24.013 [76/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:24.013 [77/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:24.013 [78/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:24.013 [79/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:24.013 [80/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:24.013 [81/710] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.013 [82/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:24.280 [83/710] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:24.280 [84/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:24.280 [85/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:24.280 [86/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:24.280 [87/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:24.280 [88/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:24.280 [89/710] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:24.280 [90/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:24.280 [91/710] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:24.280 [92/710] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:24.280 [93/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:24.280 [94/710] Linking static target lib/librte_ring.a 00:01:24.280 [95/710] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:24.280 [96/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:24.280 [97/710] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:24.280 [98/710] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:24.280 [99/710] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:24.280 [100/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:24.280 [101/710] Linking static target lib/librte_meter.a 00:01:24.543 [102/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:24.543 [103/710] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:24.543 [104/710] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:24.543 [105/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:24.543 [106/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:24.543 [107/710] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:24.543 [108/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:24.543 [109/710] Linking static target lib/librte_telemetry.a 00:01:24.543 [110/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:24.543 [111/710] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:24.543 [112/710] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:24.543 [113/710] 
Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:24.806 [114/710] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.806 [115/710] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:24.806 [116/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:24.806 [117/710] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:24.806 [118/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:24.806 [119/710] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:24.807 [120/710] Linking static target lib/librte_eal.a 00:01:24.807 [121/710] Linking static target lib/librte_net.a 00:01:24.807 [122/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:24.807 [123/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:24.807 [124/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:25.070 [125/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:25.070 [126/710] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:25.070 [127/710] Linking static target lib/librte_cmdline.a 00:01:25.070 [128/710] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:25.070 [129/710] Linking static target lib/librte_mempool.a 00:01:25.333 [130/710] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:01:25.333 [131/710] Linking static target lib/librte_cfgfile.a 00:01:25.333 [132/710] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.333 [133/710] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:25.333 [134/710] Linking target lib/librte_telemetry.so.24.0 00:01:25.333 [135/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:01:25.333 [136/710] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.333 [137/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:25.333 [138/710] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:01:25.333 [139/710] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:25.333 [140/710] Linking static target lib/librte_metrics.a 00:01:25.333 [141/710] Compiling C object lib/librte_acl.a.p/acl_tb_mem.c.o 00:01:25.598 [142/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:25.598 [143/710] Generating symbol file lib/librte_telemetry.so.24.0.p/librte_telemetry.so.24.0.symbols 00:01:25.598 [144/710] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:01:25.598 [145/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:01:25.598 [146/710] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:25.598 [147/710] Linking static target lib/librte_rcu.a 00:01:25.598 [148/710] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:01:25.598 [149/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:01:25.598 [150/710] Linking static target lib/librte_bitratestats.a 00:01:25.859 [151/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:01:25.859 [152/710] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:01:25.859 [153/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:01:25.859 [154/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 
00:01:25.859 [155/710] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:01:25.859 [156/710] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.124 [157/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:01:26.124 [158/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:01:26.124 [159/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:26.124 [160/710] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:26.124 [161/710] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.124 [162/710] Linking static target lib/librte_timer.a 00:01:26.124 [163/710] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.124 [164/710] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.124 [165/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:26.124 [166/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:01:26.383 [167/710] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:01:26.383 [168/710] Linking static target lib/librte_bbdev.a 00:01:26.383 [169/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:01:26.383 [170/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:26.383 [171/710] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.646 [172/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:26.646 [173/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:01:26.646 [174/710] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:26.646 [175/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:26.646 [176/710] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:26.646 [177/710] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:26.646 [178/710] Linking static target lib/librte_compressdev.a 00:01:26.646 [179/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:01:26.907 [180/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:01:26.907 [181/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:01:26.907 [182/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:01:27.172 [183/710] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:27.172 [184/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:01:27.172 [185/710] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:01:27.172 [186/710] Linking static target lib/librte_distributor.a 00:01:27.172 [187/710] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.172 [188/710] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:01:27.172 [189/710] Linking static target lib/librte_bpf.a 00:01:27.172 [190/710] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:27.436 [191/710] Linking static target lib/librte_dmadev.a 00:01:27.436 [192/710] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.436 [193/710] Compiling C object 
lib/librte_gso.a.p/gso_gso_udp4.c.o 00:01:27.436 [194/710] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:01:27.436 [195/710] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:01:27.436 [196/710] Compiling C object lib/librte_dispatcher.a.p/dispatcher_rte_dispatcher.c.o 00:01:27.436 [197/710] Linking static target lib/librte_dispatcher.a 00:01:27.436 [198/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:01:27.699 [199/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:01:27.699 [200/710] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.699 [201/710] Compiling C object lib/librte_gro.a.p/gro_gro_tcp6.c.o 00:01:27.699 [202/710] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:01:27.699 [203/710] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:01:27.699 [204/710] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:01:27.699 [205/710] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:01:27.699 [206/710] Linking static target lib/librte_gpudev.a 00:01:27.699 [207/710] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:27.699 [208/710] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:27.699 [209/710] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:01:27.699 [210/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:01:27.699 [211/710] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.958 [212/710] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:01:27.958 [213/710] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:27.958 [214/710] Linking static target lib/librte_gro.a 00:01:27.958 [215/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:01:27.958 [216/710] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:27.958 [217/710] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:01:27.958 [218/710] Linking static target lib/librte_jobstats.a 00:01:27.958 [219/710] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o 00:01:28.220 [220/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:01:28.220 [221/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_dma_adapter.c.o 00:01:28.220 [222/710] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.220 [223/710] Generating lib/dispatcher.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.483 [224/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:01:28.483 [225/710] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:28.483 [226/710] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:01:28.483 [227/710] Linking static target lib/librte_latencystats.a 00:01:28.483 [228/710] Compiling C object lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:01:28.483 [229/710] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:01:28.483 [230/710] Linking static target lib/member/libsketch_avx512_tmp.a 00:01:28.483 [231/710] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o 00:01:28.483 [232/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:01:28.483 [233/710] Compiling C object 
lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:01:28.743 [234/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:01:28.743 [235/710] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:01:28.743 [236/710] Linking static target lib/librte_ip_frag.a 00:01:28.743 [237/710] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:01:28.743 [238/710] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.003 [239/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:01:29.003 [240/710] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:29.003 [241/710] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:29.003 [242/710] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.003 [243/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev_pmd.c.o 00:01:29.003 [244/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:01:29.003 [245/710] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:29.265 [246/710] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.265 [247/710] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:01:29.265 [248/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils.c.o 00:01:29.265 [249/710] Linking static target lib/librte_gso.a 00:01:29.265 [250/710] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:01:29.265 [251/710] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:29.530 [252/710] Linking static target lib/librte_regexdev.a 00:01:29.530 [253/710] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:01:29.530 [254/710] Linking static target lib/librte_rawdev.a 00:01:29.530 [255/710] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:01:29.530 [256/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar_bfloat16.c.o 00:01:29.530 [257/710] Compiling C object lib/librte_mldev.a.p/mldev_rte_mldev.c.o 00:01:29.530 [258/710] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:01:29.530 [259/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o 00:01:29.530 [260/710] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:01:29.791 [261/710] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o 00:01:29.791 [262/710] Linking static target lib/librte_efd.a 00:01:29.791 [263/710] Compiling C object lib/librte_mldev.a.p/mldev_mldev_utils_scalar.c.o 00:01:29.791 [264/710] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:01:29.791 [265/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack.c.o 00:01:29.791 [266/710] Linking static target lib/librte_mldev.a 00:01:29.791 [267/710] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:01:29.791 [268/710] Linking static target lib/librte_pcapng.a 00:01:29.791 [269/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:01:30.052 [270/710] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:01:30.052 [271/710] Linking static target lib/librte_stack.a 00:01:30.052 [272/710] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o 00:01:30.052 [273/710] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:01:30.052 [274/710] Linking static target lib/librte_lpm.a 00:01:30.052 
[275/710] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:30.052 [276/710] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:30.052 [277/710] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.052 [278/710] Compiling C object lib/acl/libavx2_tmp.a.p/acl_run_avx2.c.o 00:01:30.052 [279/710] Linking static target lib/acl/libavx2_tmp.a 00:01:30.052 [280/710] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:30.052 [281/710] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.052 [282/710] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:30.317 [283/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:01:30.317 [284/710] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.317 [285/710] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:30.317 [286/710] Linking static target lib/librte_hash.a 00:01:30.317 [287/710] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.595 [288/710] Compiling C object lib/acl/libavx512_tmp.a.p/acl_run_avx512.c.o 00:01:30.595 [289/710] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:30.595 [290/710] Linking static target lib/acl/libavx512_tmp.a 00:01:30.595 [291/710] Linking static target lib/librte_reorder.a 00:01:30.595 [292/710] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:30.595 [293/710] Linking static target lib/librte_acl.a 00:01:30.595 [294/710] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:30.595 [295/710] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:30.595 [296/710] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.595 [297/710] Linking static target lib/librte_power.a 00:01:30.595 [298/710] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:30.595 [299/710] Linking static target lib/librte_security.a 00:01:30.595 [300/710] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.882 [301/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:01:30.882 [302/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:30.882 [303/710] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:30.882 [304/710] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:30.882 [305/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:01:30.882 [306/710] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:01:30.882 [307/710] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output) 00:01:30.882 [308/710] Linking static target lib/librte_rib.a 00:01:30.882 [309/710] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:30.882 [310/710] Linking static target lib/librte_mbuf.a 00:01:30.882 [311/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_ctrl_pdu.c.o 00:01:30.882 [312/710] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.153 [313/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_crypto.c.o 00:01:31.153 [314/710] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.153 [315/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_cnt.c.o 00:01:31.153 [316/710] 
Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:01:31.153 [317/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_reorder.c.o 00:01:31.153 [318/710] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.418 [319/710] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:01:31.418 [320/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:01:31.418 [321/710] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:01:31.418 [322/710] Compiling C object lib/fib/libtrie_avx512_tmp.a.p/trie_avx512.c.o 00:01:31.418 [323/710] Linking static target lib/fib/libtrie_avx512_tmp.a 00:01:31.418 [324/710] Compiling C object lib/fib/libdir24_8_avx512_tmp.a.p/dir24_8_avx512.c.o 00:01:31.418 [325/710] Linking static target lib/fib/libdir24_8_avx512_tmp.a 00:01:31.418 [326/710] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.685 [327/710] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.685 [328/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:31.685 [329/710] Generating lib/mldev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.685 [330/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o 00:01:31.685 [331/710] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:31.944 [332/710] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:01:31.944 [333/710] Compiling C object lib/librte_pdcp.a.p/pdcp_rte_pdcp.c.o 00:01:31.944 [334/710] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:01:31.944 [335/710] Linking static target lib/librte_eventdev.a 00:01:32.205 [336/710] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o 00:01:32.205 [337/710] Linking static target lib/librte_member.a 00:01:32.205 [338/710] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o 00:01:32.205 [339/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:32.467 [340/710] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:01:32.467 [341/710] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:32.467 [342/710] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o 00:01:32.467 [343/710] Linking static target lib/librte_cryptodev.a 00:01:32.467 [344/710] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:01:32.467 [345/710] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o 00:01:32.467 [346/710] Compiling C object lib/librte_fib.a.p/fib_trie.c.o 00:01:32.467 [347/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:01:32.467 [348/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:01:32.467 [349/710] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:01:32.467 [350/710] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o 00:01:32.730 [351/710] Linking static target lib/librte_sched.a 00:01:32.730 [352/710] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.730 [353/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:01:32.730 [354/710] Compiling C object lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:01:32.730 [355/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:01:32.730 [356/710] Compiling C object 
lib/librte_fib.a.p/fib_dir24_8.c.o 00:01:32.730 [357/710] Linking static target lib/librte_fib.a 00:01:32.730 [358/710] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:32.730 [359/710] Linking static target lib/librte_ethdev.a 00:01:32.730 [360/710] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o 00:01:32.998 [361/710] Compiling C object lib/librte_table.a.p/table_rte_table_stub.c.o 00:01:32.998 [362/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o 00:01:32.998 [363/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o 00:01:32.998 [364/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o 00:01:32.998 [365/710] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:01:33.259 [366/710] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o 00:01:33.259 [367/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:33.259 [368/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o 00:01:33.259 [369/710] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o 00:01:33.259 [370/710] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.259 [371/710] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.259 [372/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o 00:01:33.259 [373/710] Compiling C object lib/librte_node.a.p/node_null.c.o 00:01:33.525 [374/710] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o 00:01:33.525 [375/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o 00:01:33.525 [376/710] Linking static target lib/librte_pdump.a 00:01:33.525 [377/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o 00:01:33.787 [378/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:33.787 [379/710] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o 00:01:33.788 [380/710] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o 00:01:33.788 [381/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_worker.c.o 00:01:33.788 [382/710] Compiling C object lib/librte_graph.a.p/graph_node.c.o 00:01:33.788 [383/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o 00:01:33.788 [384/710] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o 00:01:34.054 [385/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o 00:01:34.054 [386/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:34.054 [387/710] Compiling C object lib/librte_graph.a.p/graph_graph_pcap.c.o 00:01:34.054 [388/710] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o 00:01:34.054 [389/710] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.054 [390/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o 00:01:34.054 [391/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o 00:01:34.320 [392/710] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o 00:01:34.320 [393/710] Compiling C object lib/librte_graph.a.p/graph_graph.c.o 00:01:34.320 [394/710] Linking static target lib/librte_ipsec.a 00:01:34.320 [395/710] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o 00:01:34.320 [396/710] Linking static target lib/librte_table.a 00:01:34.320 [397/710] Generating lib/cryptodev.sym_chk with 
a custom command (wrapped by meson to capture output) 00:01:34.320 [398/710] Compiling C object lib/librte_node.a.p/node_log.c.o 00:01:34.581 [399/710] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o 00:01:34.581 [400/710] Compiling C object lib/librte_node.a.p/node_kernel_tx.c.o 00:01:34.847 [401/710] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.847 [402/710] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o 00:01:35.115 [403/710] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o 00:01:35.115 [404/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:35.115 [405/710] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o 00:01:35.115 [406/710] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o 00:01:35.115 [407/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:35.376 [408/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:35.376 [409/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o 00:01:35.376 [410/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:35.376 [411/710] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:35.376 [412/710] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:35.376 [413/710] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o 00:01:35.376 [414/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ipsec.c.o 00:01:35.376 [415/710] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.637 [416/710] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.637 [417/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o 00:01:35.637 [418/710] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:35.637 [419/710] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:35.637 [420/710] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:35.903 [421/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o 00:01:35.903 [422/710] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.903 [423/710] Linking static target drivers/librte_bus_vdev.a 00:01:35.903 [424/710] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:35.903 [425/710] Compiling C object drivers/librte_bus_vdev.so.24.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:35.903 [426/710] Linking target lib/librte_eal.so.24.0 00:01:35.903 [427/710] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o 00:01:35.903 [428/710] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:35.903 [429/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o 00:01:36.168 [430/710] Linking static target lib/librte_port.a 00:01:36.168 [431/710] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.168 [432/710] Compiling C object drivers/librte_bus_pci.so.24.0.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:36.168 [433/710] Linking static target drivers/librte_bus_pci.a 00:01:36.168 [434/710] Compiling C object lib/librte_graph.a.p/graph_rte_graph_model_mcore_dispatch.c.o 00:01:36.168 [435/710] Linking static target lib/librte_graph.a 00:01:36.168 [436/710] Generating symbol file 
lib/librte_eal.so.24.0.p/librte_eal.so.24.0.symbols 00:01:36.168 [437/710] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.168 [438/710] Compiling C object lib/librte_node.a.p/node_kernel_rx.c.o 00:01:36.168 [439/710] Linking target lib/librte_meter.so.24.0 00:01:36.168 [440/710] Linking target lib/librte_ring.so.24.0 00:01:36.434 [441/710] Compiling C object lib/librte_node.a.p/node_ip4_reassembly.c.o 00:01:36.434 [442/710] Linking target lib/librte_pci.so.24.0 00:01:36.434 [443/710] Linking target lib/librte_timer.so.24.0 00:01:36.434 [444/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o 00:01:36.434 [445/710] Compiling C object app/dpdk-graph.p/graph_cli.c.o 00:01:36.434 [446/710] Linking target lib/librte_acl.so.24.0 00:01:36.434 [447/710] Generating symbol file lib/librte_meter.so.24.0.p/librte_meter.so.24.0.symbols 00:01:36.434 [448/710] Generating symbol file lib/librte_ring.so.24.0.p/librte_ring.so.24.0.symbols 00:01:36.434 [449/710] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:36.434 [450/710] Linking target lib/librte_cfgfile.so.24.0 00:01:36.700 [451/710] Generating symbol file lib/librte_pci.so.24.0.p/librte_pci.so.24.0.symbols 00:01:36.700 [452/710] Linking target lib/librte_dmadev.so.24.0 00:01:36.700 [453/710] Linking target lib/librte_rcu.so.24.0 00:01:36.700 [454/710] Linking target lib/librte_jobstats.so.24.0 00:01:36.700 [455/710] Generating symbol file lib/librte_timer.so.24.0.p/librte_timer.so.24.0.symbols 00:01:36.700 [456/710] Linking target lib/librte_mempool.so.24.0 00:01:36.700 [457/710] Linking target lib/librte_rawdev.so.24.0 00:01:36.700 [458/710] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:36.700 [459/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o 00:01:36.700 [460/710] Linking target drivers/librte_bus_vdev.so.24.0 00:01:36.700 [461/710] Linking target lib/librte_stack.so.24.0 00:01:36.700 [462/710] Compiling C object app/dpdk-graph.p/graph_conn.c.o 00:01:36.700 [463/710] Compiling C object app/dpdk-graph.p/graph_ethdev_rx.c.o 00:01:36.700 [464/710] Generating symbol file lib/librte_acl.so.24.0.p/librte_acl.so.24.0.symbols 00:01:36.960 [465/710] Compiling C object app/dpdk-graph.p/graph_ip4_route.c.o 00:01:36.960 [466/710] Generating symbol file lib/librte_dmadev.so.24.0.p/librte_dmadev.so.24.0.symbols 00:01:36.960 [467/710] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.960 [468/710] Generating symbol file lib/librte_rcu.so.24.0.p/librte_rcu.so.24.0.symbols 00:01:36.960 [469/710] Generating symbol file lib/librte_mempool.so.24.0.p/librte_mempool.so.24.0.symbols 00:01:36.960 [470/710] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.960 [471/710] Generating symbol file drivers/librte_bus_vdev.so.24.0.p/librte_bus_vdev.so.24.0.symbols 00:01:36.960 [472/710] Linking target drivers/librte_bus_pci.so.24.0 00:01:36.960 [473/710] Linking target lib/librte_mbuf.so.24.0 00:01:36.960 [474/710] Linking target lib/librte_rib.so.24.0 00:01:36.960 [475/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o 00:01:36.960 [476/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o 00:01:36.960 [477/710] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:37.221 [478/710] Compiling C object 
app/dpdk-graph.p/graph_ip6_route.c.o 00:01:37.221 [479/710] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.221 [480/710] Linking static target drivers/librte_mempool_ring.a 00:01:37.221 [481/710] Compiling C object drivers/librte_mempool_ring.so.24.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:37.221 [482/710] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output) 00:01:37.221 [483/710] Compiling C object app/dpdk-graph.p/graph_graph.c.o 00:01:37.221 [484/710] Generating symbol file drivers/librte_bus_pci.so.24.0.p/librte_bus_pci.so.24.0.symbols 00:01:37.221 [485/710] Generating symbol file lib/librte_mbuf.so.24.0.p/librte_mbuf.so.24.0.symbols 00:01:37.221 [486/710] Linking target drivers/librte_mempool_ring.so.24.0 00:01:37.221 [487/710] Generating symbol file lib/librte_rib.so.24.0.p/librte_rib.so.24.0.symbols 00:01:37.221 [488/710] Compiling C object app/dpdk-graph.p/graph_ethdev.c.o 00:01:37.221 [489/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o 00:01:37.221 [490/710] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o 00:01:37.221 [491/710] Linking target lib/librte_bbdev.so.24.0 00:01:37.221 [492/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o 00:01:37.221 [493/710] Linking target lib/librte_net.so.24.0 00:01:37.221 [494/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_recycle_mbufs_vec_common.c.o 00:01:37.221 [495/710] Linking target lib/librte_compressdev.so.24.0 00:01:37.221 [496/710] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o 00:01:37.221 [497/710] Linking target lib/librte_distributor.so.24.0 00:01:37.221 [498/710] Linking target lib/librte_cryptodev.so.24.0 00:01:37.221 [499/710] Linking target lib/librte_gpudev.so.24.0 00:01:37.221 [500/710] Linking target lib/librte_regexdev.so.24.0 00:01:37.221 [501/710] Linking target lib/librte_mldev.so.24.0 00:01:37.221 [502/710] Linking target lib/librte_reorder.so.24.0 00:01:37.489 [503/710] Linking target lib/librte_sched.so.24.0 00:01:37.489 [504/710] Linking target lib/librte_fib.so.24.0 00:01:37.489 [505/710] Compiling C object app/dpdk-graph.p/graph_l3fwd.c.o 00:01:37.489 [506/710] Generating symbol file lib/librte_net.so.24.0.p/librte_net.so.24.0.symbols 00:01:37.489 [507/710] Generating symbol file lib/librte_cryptodev.so.24.0.p/librte_cryptodev.so.24.0.symbols 00:01:37.489 [508/710] Generating symbol file lib/librte_reorder.so.24.0.p/librte_reorder.so.24.0.symbols 00:01:37.489 [509/710] Generating symbol file lib/librte_sched.so.24.0.p/librte_sched.so.24.0.symbols 00:01:37.489 [510/710] Linking target lib/librte_cmdline.so.24.0 00:01:37.489 [511/710] Compiling C object lib/librte_node.a.p/node_ip4_local.c.o 00:01:37.755 [512/710] Linking target lib/librte_hash.so.24.0 00:01:37.755 [513/710] Linking target lib/librte_security.so.24.0 00:01:37.755 [514/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o 00:01:38.014 [515/710] Generating symbol file lib/librte_security.so.24.0.p/librte_security.so.24.0.symbols 00:01:38.014 [516/710] Generating symbol file lib/librte_hash.so.24.0.p/librte_hash.so.24.0.symbols 00:01:38.014 [517/710] Compiling C object app/dpdk-graph.p/graph_main.c.o 00:01:38.014 [518/710] Compiling C object app/dpdk-graph.p/graph_mempool.c.o 00:01:38.014 [519/710] Linking target lib/librte_efd.so.24.0 00:01:38.014 [520/710] Compiling C object app/dpdk-graph.p/graph_utils.c.o 
00:01:38.014 [521/710] Linking target lib/librte_lpm.so.24.0 00:01:38.014 [522/710] Linking target lib/librte_member.so.24.0 00:01:38.014 [523/710] Linking target lib/librte_ipsec.so.24.0 00:01:38.277 [524/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o 00:01:38.278 [525/710] Generating symbol file lib/librte_lpm.so.24.0.p/librte_lpm.so.24.0.symbols 00:01:38.278 [526/710] Compiling C object lib/librte_node.a.p/node_udp4_input.c.o 00:01:38.278 [527/710] Generating symbol file lib/librte_ipsec.so.24.0.p/librte_ipsec.so.24.0.symbols 00:01:38.278 [528/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o 00:01:38.278 [529/710] Compiling C object app/dpdk-graph.p/graph_neigh.c.o 00:01:38.278 [530/710] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o 00:01:38.278 [531/710] Linking static target drivers/net/i40e/libi40e_avx512_lib.a 00:01:38.537 [532/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o 00:01:38.797 [533/710] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o 00:01:38.797 [534/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o 00:01:38.797 [535/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o 00:01:38.797 [536/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o 00:01:38.797 [537/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o 00:01:38.797 [538/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o 00:01:38.797 [539/710] Compiling C object drivers/net/i40e/libi40e_avx2_lib.a.p/i40e_rxtx_vec_avx2.c.o 00:01:38.797 [540/710] Linking static target drivers/net/i40e/libi40e_avx2_lib.a 00:01:38.797 [541/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o 00:01:39.061 [542/710] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o 00:01:39.321 [543/710] Compiling C object app/dpdk-pdump.p/pdump_main.c.o 00:01:39.321 [544/710] Compiling C object lib/librte_node.a.p/node_ip6_lookup.c.o 00:01:39.321 [545/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o 00:01:39.321 [546/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o 00:01:39.321 [547/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o 00:01:39.321 [548/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o 00:01:39.321 [549/710] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o 00:01:39.322 [550/710] Linking static target drivers/net/i40e/base/libi40e_base.a 00:01:39.585 [551/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o 00:01:39.585 [552/710] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o 00:01:39.585 [553/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_test.c.o 00:01:39.585 [554/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_main.c.o 00:01:39.848 [555/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o 00:01:39.848 [556/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o 00:01:39.848 [557/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o 00:01:39.848 [558/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o 00:01:39.848 [559/710] Compiling C object 
app/dpdk-test-mldev.p/test-mldev_parser.c.o 00:01:40.420 [560/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o 00:01:40.683 [561/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_main.c.o 00:01:40.683 [562/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o 00:01:40.683 [563/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o 00:01:40.683 [564/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o 00:01:40.683 [565/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o 00:01:40.944 [566/710] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o 00:01:40.944 [567/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_ml_options.c.o 00:01:40.944 [568/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o 00:01:40.944 [569/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_device_ops.c.o 00:01:40.944 [570/710] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:40.944 [571/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_common.c.o 00:01:41.210 [572/710] Linking target lib/librte_ethdev.so.24.0 00:01:41.210 [573/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_ordered.c.o 00:01:41.210 [574/710] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o 00:01:41.210 [575/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o 00:01:41.210 [576/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_interleave.c.o 00:01:41.210 [577/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_model_ops.c.o 00:01:41.210 [578/710] Generating symbol file lib/librte_ethdev.so.24.0.p/librte_ethdev.so.24.0.symbols 00:01:41.210 [579/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o 00:01:41.472 [580/710] Compiling C object lib/librte_pdcp.a.p/pdcp_pdcp_process.c.o 00:01:41.472 [581/710] Linking target lib/librte_metrics.so.24.0 00:01:41.472 [582/710] Linking target lib/librte_bpf.so.24.0 00:01:41.472 [583/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o 00:01:41.472 [584/710] Compiling C object app/dpdk-test-dma-perf.p/test-dma-perf_benchmark.c.o 00:01:41.472 [585/710] Linking target lib/librte_eventdev.so.24.0 00:01:41.472 [586/710] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o 00:01:41.472 [587/710] Linking target lib/librte_gro.so.24.0 00:01:41.472 [588/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o 00:01:41.472 [589/710] Linking target lib/librte_gso.so.24.0 00:01:41.472 [590/710] Linking static target lib/librte_pdcp.a 00:01:41.472 [591/710] Linking target lib/librte_ip_frag.so.24.0 00:01:41.736 [592/710] Linking target lib/librte_pcapng.so.24.0 00:01:41.736 [593/710] Generating symbol file lib/librte_metrics.so.24.0.p/librte_metrics.so.24.0.symbols 00:01:41.736 [594/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_stats.c.o 00:01:41.736 [595/710] Linking target lib/librte_power.so.24.0 00:01:41.736 [596/710] Generating symbol file lib/librte_bpf.so.24.0.p/librte_bpf.so.24.0.symbols 00:01:41.736 [597/710] Generating symbol file lib/librte_eventdev.so.24.0.p/librte_eventdev.so.24.0.symbols 00:01:41.736 [598/710] Linking target lib/librte_bitratestats.so.24.0 00:01:41.736 [599/710] Linking target lib/librte_latencystats.so.24.0 00:01:41.736 
[600/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o 00:01:41.736 [601/710] Linking target lib/librte_dispatcher.so.24.0 00:01:41.736 [602/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_common.c.o 00:01:41.736 [603/710] Generating symbol file lib/librte_ip_frag.so.24.0.p/librte_ip_frag.so.24.0.symbols 00:01:42.000 [604/710] Generating symbol file lib/librte_pcapng.so.24.0.p/librte_pcapng.so.24.0.symbols 00:01:42.000 [605/710] Linking target lib/librte_port.so.24.0 00:01:42.000 [606/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o 00:01:42.000 [607/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o 00:01:42.000 [608/710] Linking target lib/librte_pdump.so.24.0 00:01:42.000 [609/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o 00:01:42.000 [610/710] Linking target lib/librte_graph.so.24.0 00:01:42.000 [611/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o 00:01:42.000 [612/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o 00:01:42.263 [613/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o 00:01:42.263 [614/710] Generating symbol file lib/librte_port.so.24.0.p/librte_port.so.24.0.symbols 00:01:42.263 [615/710] Generating lib/pdcp.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.263 [616/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o 00:01:42.263 [617/710] Linking target lib/librte_pdcp.so.24.0 00:01:42.263 [618/710] Linking target lib/librte_table.so.24.0 00:01:42.263 [619/710] Generating symbol file lib/librte_graph.so.24.0.p/librte_graph.so.24.0.symbols 00:01:42.263 [620/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o 00:01:42.264 [621/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o 00:01:42.526 [622/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o 00:01:42.526 [623/710] Generating symbol file lib/librte_table.so.24.0.p/librte_table.so.24.0.symbols 00:01:42.526 [624/710] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o 00:01:42.526 [625/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o 00:01:42.526 [626/710] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o 00:01:42.526 [627/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_cman.c.o 00:01:42.785 [628/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o 00:01:42.785 [629/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o 00:01:43.045 [630/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o 00:01:43.045 [631/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o 00:01:43.045 [632/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o 00:01:43.304 [633/710] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o 00:01:43.304 [634/710] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o 00:01:43.304 [635/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o 00:01:43.304 [636/710] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o 00:01:43.304 [637/710] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o 00:01:43.304 [638/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o 00:01:43.563 
[639/710] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o 00:01:43.563 [640/710] Compiling C object app/dpdk-testpmd.p/test-pmd_recycle_mbufs.c.o 00:01:43.563 [641/710] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o 00:01:43.563 [642/710] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o 00:01:43.563 [643/710] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o 00:01:43.822 [644/710] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o 00:01:43.822 [645/710] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o 00:01:43.822 [646/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o 00:01:43.822 [647/710] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o 00:01:43.822 [648/710] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o 00:01:44.080 [649/710] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o 00:01:44.080 [650/710] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o 00:01:44.080 [651/710] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o 00:01:44.080 [652/710] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o 00:01:44.366 [653/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o 00:01:44.366 [654/710] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o 00:01:44.366 [655/710] Linking static target drivers/libtmp_rte_net_i40e.a 00:01:44.366 [656/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o 00:01:44.626 [657/710] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o 00:01:44.626 [658/710] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o 00:01:44.626 [659/710] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o 00:01:44.885 [660/710] Generating drivers/rte_net_i40e.pmd.c with a custom command 00:01:44.885 [661/710] Compiling C object drivers/librte_net_i40e.so.24.0.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:44.885 [662/710] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o 00:01:44.885 [663/710] Linking static target drivers/librte_net_i40e.a 00:01:44.885 [664/710] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o 00:01:45.143 [665/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o 00:01:45.143 [666/710] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o 00:01:45.401 [667/710] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output) 00:01:45.401 [668/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o 00:01:45.659 [669/710] Linking target drivers/librte_net_i40e.so.24.0 00:01:45.659 [670/710] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o 00:01:45.918 [671/710] Compiling C object lib/librte_node.a.p/node_ip6_rewrite.c.o 00:01:46.176 [672/710] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o 00:01:46.176 [673/710] Linking static target lib/librte_node.a 00:01:46.434 [674/710] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output) 00:01:46.434 [675/710] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o 00:01:46.692 [676/710] Linking target lib/librte_node.so.24.0 00:01:48.067 [677/710] Compiling C object 
app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o 00:01:48.067 [678/710] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o 00:01:48.067 [679/710] Compiling C object app/dpdk-test-mldev.p/test-mldev_test_inference_common.c.o 00:01:49.437 [680/710] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o 00:01:50.001 [681/710] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o 00:01:56.547 [682/710] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:28.603 [683/710] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:28.603 [684/710] Linking static target lib/librte_vhost.a 00:02:28.603 [685/710] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.603 [686/710] Linking target lib/librte_vhost.so.24.0 00:02:50.594 [687/710] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o 00:02:50.594 [688/710] Linking static target lib/librte_pipeline.a 00:02:50.594 [689/710] Linking target app/dpdk-proc-info 00:02:50.594 [690/710] Linking target app/dpdk-test-acl 00:02:50.594 [691/710] Linking target app/dpdk-test-cmdline 00:02:50.594 [692/710] Linking target app/dpdk-dumpcap 00:02:50.594 [693/710] Linking target app/dpdk-graph 00:02:50.594 [694/710] Linking target app/dpdk-test-flow-perf 00:02:50.594 [695/710] Linking target app/dpdk-pdump 00:02:50.594 [696/710] Linking target app/dpdk-test-mldev 00:02:50.594 [697/710] Linking target app/dpdk-test-compress-perf 00:02:50.594 [698/710] Linking target app/dpdk-test-gpudev 00:02:50.594 [699/710] Linking target app/dpdk-test-regex 00:02:50.594 [700/710] Linking target app/dpdk-test-fib 00:02:50.594 [701/710] Linking target app/dpdk-test-dma-perf 00:02:50.594 [702/710] Linking target app/dpdk-test-pipeline 00:02:50.594 [703/710] Linking target app/dpdk-test-sad 00:02:50.594 [704/710] Linking target app/dpdk-test-security-perf 00:02:50.594 [705/710] Linking target app/dpdk-test-bbdev 00:02:50.594 [706/710] Linking target app/dpdk-test-eventdev 00:02:50.594 [707/710] Linking target app/dpdk-test-crypto-perf 00:02:50.594 [708/710] Linking target app/dpdk-testpmd 00:02:50.594 [709/710] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:50.594 [710/710] Linking target lib/librte_pipeline.so.24.0 00:02:50.594 21:09:24 build_native_dpdk -- common/autobuild_common.sh@188 -- $ uname -s 00:02:50.594 21:09:24 build_native_dpdk -- common/autobuild_common.sh@188 -- $ [[ Linux == \F\r\e\e\B\S\D ]] 00:02:50.594 21:09:24 build_native_dpdk -- common/autobuild_common.sh@201 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install 00:02:50.594 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:50.594 [0/1] Installing files. 
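The xtrace lines above (autobuild_common.sh@188 and @201) show a `uname -s` comparison against FreeBSD before the install step is run. A minimal sketch of the guard those trace lines suggest, assuming this script structure rather than quoting the script itself:

    # Sketch of the guard implied by the xtrace output; the FreeBSD branch
    # is not exercised in this Linux run.
    if [[ "$(uname -s)" == "FreeBSD" ]]; then
        : # FreeBSD-specific handling would go here
    fi
    ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j48 install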
00:02:50.594 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.594 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 
00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:50.595 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.596 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-macsec/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-macsec 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:50.596 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_node 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/efd_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/efd_server 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.597 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.598 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ipsec_sa.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:50.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/commands.list to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:50.599 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:50.599 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:50.599 Installing lib/librte_log.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_rcu.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_hash.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_hash.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.599 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_distributor.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_dispatcher.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_gso.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_lpm.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_mldev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.600 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_sched.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_vhost.so.24.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_pdcp.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_pipeline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing lib/librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing drivers/librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.863 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing drivers/librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.863 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing drivers/librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.863 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:50.863 Installing drivers/librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0 00:02:50.863 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-graph to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-acl to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing 
app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-dma-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-mldev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/log/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.863 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lock_annotations.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_stdatomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_dtls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_pdcp_hdr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.864 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_dma_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dispatcher/rte_dispatcher.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.865 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mldev/rte_mldev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdcp/rte_pdcp_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_mcore_dispatch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_model_rtc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip6_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_udp4_input_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.866 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/dpdk-cmdline-gen.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-rss-flows.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:50.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:50.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:50.867 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig 00:02:50.867 Installing symlink pointing to librte_log.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so.24 00:02:50.867 Installing symlink pointing to librte_log.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_log.so 00:02:50.867 Installing symlink pointing to librte_kvargs.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.24 00:02:50.867 Installing symlink pointing to librte_kvargs.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so 00:02:50.867 Installing symlink pointing to librte_telemetry.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.24 00:02:50.867 Installing symlink pointing to librte_telemetry.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so 00:02:50.867 Installing symlink pointing to librte_eal.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.24 00:02:50.867 Installing symlink pointing to librte_eal.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:50.867 Installing symlink pointing to librte_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.24 00:02:50.867 Installing symlink pointing to librte_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:50.867 Installing symlink pointing to librte_rcu.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.24 00:02:50.867 Installing symlink pointing to librte_rcu.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:50.867 Installing symlink pointing to librte_mempool.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.24 00:02:50.867 Installing symlink pointing to librte_mempool.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:50.867 Installing symlink pointing to librte_mbuf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.24 00:02:50.867 Installing symlink pointing to librte_mbuf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:50.867 Installing symlink pointing to librte_net.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.24 00:02:50.867 Installing symlink pointing to librte_net.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:50.867 Installing symlink pointing to librte_meter.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.24 00:02:50.867 Installing symlink pointing to librte_meter.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:50.867 Installing symlink pointing to librte_ethdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.24 00:02:50.867 Installing symlink pointing to librte_ethdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:50.867 Installing symlink pointing to librte_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.24 00:02:50.867 Installing symlink pointing to librte_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:50.867 Installing symlink pointing to librte_cmdline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.24 00:02:50.867 Installing symlink pointing to librte_cmdline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:50.867 Installing symlink pointing to librte_metrics.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.24 00:02:50.867 Installing symlink pointing to librte_metrics.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:50.867 Installing symlink pointing to librte_hash.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.24 00:02:50.867 Installing symlink pointing to librte_hash.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:50.867 Installing symlink pointing to librte_timer.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.24 00:02:50.867 Installing symlink pointing to librte_timer.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:50.867 
Installing symlink pointing to librte_acl.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.24 00:02:50.867 Installing symlink pointing to librte_acl.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:50.867 Installing symlink pointing to librte_bbdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.24 00:02:50.867 Installing symlink pointing to librte_bbdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:50.867 Installing symlink pointing to librte_bitratestats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.24 00:02:50.867 Installing symlink pointing to librte_bitratestats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:50.867 Installing symlink pointing to librte_bpf.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.24 00:02:50.867 Installing symlink pointing to librte_bpf.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:50.867 Installing symlink pointing to librte_cfgfile.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.24 00:02:50.867 Installing symlink pointing to librte_cfgfile.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:50.867 Installing symlink pointing to librte_compressdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.24 00:02:50.867 Installing symlink pointing to librte_compressdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:50.867 Installing symlink pointing to librte_cryptodev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.24 00:02:50.867 Installing symlink pointing to librte_cryptodev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:50.867 Installing symlink pointing to librte_distributor.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.24 00:02:50.867 Installing symlink pointing to librte_distributor.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:50.867 Installing symlink pointing to librte_dmadev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.24 00:02:50.867 Installing symlink pointing to librte_dmadev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:50.867 Installing symlink pointing to librte_efd.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.24 00:02:50.867 Installing symlink pointing to librte_efd.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:50.867 Installing symlink pointing to librte_eventdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.24 00:02:50.867 Installing symlink pointing to librte_eventdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:50.867 Installing symlink pointing to librte_dispatcher.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so.24 00:02:50.867 Installing symlink pointing to librte_dispatcher.so.24 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dispatcher.so 00:02:50.867 Installing symlink pointing to librte_gpudev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.24 00:02:50.867 Installing symlink pointing to librte_gpudev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:50.867 Installing symlink pointing to librte_gro.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.24 00:02:50.867 Installing symlink pointing to librte_gro.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:50.867 Installing symlink pointing to librte_gso.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.24 00:02:50.867 Installing symlink pointing to librte_gso.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:50.867 Installing symlink pointing to librte_ip_frag.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.24 00:02:50.867 Installing symlink pointing to librte_ip_frag.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:50.867 Installing symlink pointing to librte_jobstats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.24 00:02:50.867 Installing symlink pointing to librte_jobstats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:50.867 Installing symlink pointing to librte_latencystats.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.24 00:02:50.867 Installing symlink pointing to librte_latencystats.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:50.867 Installing symlink pointing to librte_lpm.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.24 00:02:50.867 Installing symlink pointing to librte_lpm.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:50.867 Installing symlink pointing to librte_member.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.24 00:02:50.868 Installing symlink pointing to librte_member.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:50.868 Installing symlink pointing to librte_pcapng.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.24 00:02:50.868 Installing symlink pointing to librte_pcapng.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:50.868 Installing symlink pointing to librte_power.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.24 00:02:50.868 Installing symlink pointing to librte_power.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:50.868 './librte_bus_pci.so' -> 'dpdk/pmds-24.0/librte_bus_pci.so' 00:02:50.868 './librte_bus_pci.so.24' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24' 00:02:50.868 './librte_bus_pci.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_pci.so.24.0' 00:02:50.868 './librte_bus_vdev.so' -> 'dpdk/pmds-24.0/librte_bus_vdev.so' 00:02:50.868 './librte_bus_vdev.so.24' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24' 00:02:50.868 './librte_bus_vdev.so.24.0' -> 'dpdk/pmds-24.0/librte_bus_vdev.so.24.0' 00:02:50.868 './librte_mempool_ring.so' -> 
'dpdk/pmds-24.0/librte_mempool_ring.so' 00:02:50.868 './librte_mempool_ring.so.24' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24' 00:02:50.868 './librte_mempool_ring.so.24.0' -> 'dpdk/pmds-24.0/librte_mempool_ring.so.24.0' 00:02:50.868 './librte_net_i40e.so' -> 'dpdk/pmds-24.0/librte_net_i40e.so' 00:02:50.868 './librte_net_i40e.so.24' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24' 00:02:50.868 './librte_net_i40e.so.24.0' -> 'dpdk/pmds-24.0/librte_net_i40e.so.24.0' 00:02:50.868 Installing symlink pointing to librte_rawdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.24 00:02:50.868 Installing symlink pointing to librte_rawdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:50.868 Installing symlink pointing to librte_regexdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.24 00:02:50.868 Installing symlink pointing to librte_regexdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:50.868 Installing symlink pointing to librte_mldev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so.24 00:02:50.868 Installing symlink pointing to librte_mldev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mldev.so 00:02:50.868 Installing symlink pointing to librte_rib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.24 00:02:50.868 Installing symlink pointing to librte_rib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:50.868 Installing symlink pointing to librte_reorder.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.24 00:02:50.868 Installing symlink pointing to librte_reorder.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:50.868 Installing symlink pointing to librte_sched.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.24 00:02:50.868 Installing symlink pointing to librte_sched.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:50.868 Installing symlink pointing to librte_security.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.24 00:02:50.868 Installing symlink pointing to librte_security.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:50.868 Installing symlink pointing to librte_stack.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.24 00:02:50.868 Installing symlink pointing to librte_stack.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:50.868 Installing symlink pointing to librte_vhost.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.24 00:02:50.868 Installing symlink pointing to librte_vhost.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:50.868 Installing symlink pointing to librte_ipsec.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.24 00:02:50.868 Installing symlink pointing to librte_ipsec.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:50.868 Installing symlink pointing to librte_pdcp.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so.24 00:02:50.868 Installing 
symlink pointing to librte_pdcp.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdcp.so 00:02:50.868 Installing symlink pointing to librte_fib.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.24 00:02:50.868 Installing symlink pointing to librte_fib.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:50.868 Installing symlink pointing to librte_port.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.24 00:02:50.868 Installing symlink pointing to librte_port.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:50.868 Installing symlink pointing to librte_pdump.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.24 00:02:50.868 Installing symlink pointing to librte_pdump.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:50.868 Installing symlink pointing to librte_table.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.24 00:02:50.868 Installing symlink pointing to librte_table.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:50.868 Installing symlink pointing to librte_pipeline.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.24 00:02:50.868 Installing symlink pointing to librte_pipeline.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:50.868 Installing symlink pointing to librte_graph.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.24 00:02:50.868 Installing symlink pointing to librte_graph.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:50.868 Installing symlink pointing to librte_node.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.24 00:02:50.868 Installing symlink pointing to librte_node.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:50.868 Installing symlink pointing to librte_bus_pci.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24 00:02:50.868 Installing symlink pointing to librte_bus_pci.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:02:50.868 Installing symlink pointing to librte_bus_vdev.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24 00:02:50.868 Installing symlink pointing to librte_bus_vdev.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:02:50.868 Installing symlink pointing to librte_mempool_ring.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24 00:02:50.868 Installing symlink pointing to librte_mempool_ring.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:02:50.868 Installing symlink pointing to librte_net_i40e.so.24.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24 00:02:50.868 Installing symlink pointing to librte_net_i40e.so.24 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:02:50.868 Running custom install script '/bin/sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-24.0' 00:02:50.868 21:09:25 build_native_dpdk -- common/autobuild_common.sh@207 -- $ cat 00:02:50.868 21:09:25 build_native_dpdk -- common/autobuild_common.sh@212 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:50.868 00:02:50.868 real 1m34.586s 00:02:50.868 user 18m6.229s 00:02:50.868 sys 2m6.444s 00:02:50.868 21:09:25 build_native_dpdk -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:50.868 21:09:25 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:50.868 ************************************ 00:02:50.868 END TEST build_native_dpdk 00:02:50.868 ************************************ 00:02:50.868 21:09:25 -- common/autotest_common.sh@1142 -- $ return 0 00:02:50.868 21:09:25 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:50.868 21:09:25 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:50.868 21:09:25 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:02:50.868 21:09:25 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:50.868 21:09:25 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:50.868 21:09:25 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:50.868 21:09:25 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:50.868 21:09:25 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:51.128 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:51.128 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:51.128 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:51.128 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:51.386 Using 'verbs' RDMA provider 00:03:01.922 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:11.890 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:11.890 Creating mk/config.mk...done. 00:03:11.890 Creating mk/cc.flags.mk...done. 00:03:11.890 Type 'make' to build. 00:03:11.890 21:09:45 -- spdk/autobuild.sh@69 -- $ run_test make make -j48 00:03:11.890 21:09:45 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:03:11.890 21:09:45 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:03:11.890 21:09:45 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.890 ************************************ 00:03:11.890 START TEST make 00:03:11.890 ************************************ 00:03:11.890 21:09:45 make -- common/autotest_common.sh@1123 -- $ make -j48 00:03:11.890 make[1]: Nothing to be done for 'all'. 
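The timing summary above closes the build_native_dpdk sub-test; SPDK's configure is then pointed at the freshly staged DPDK prefix via --with-dpdk, which is why the pkgconfig and include paths under dpdk/build are echoed before the SPDK build itself starts. A condensed sketch of the same flow outside CI, keeping only the DPDK-related flags from the full command above (paths and job count illustrative):

    cd spdk
    ./configure --enable-debug --enable-werror \
        --with-dpdk=/path/to/dpdk/build --with-shared
    make -j48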
00:03:12.152 The Meson build system 00:03:12.152 Version: 1.3.1 00:03:12.152 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:12.152 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:12.152 Build type: native build 00:03:12.152 Project name: libvfio-user 00:03:12.152 Project version: 0.0.1 00:03:12.152 C compiler for the host machine: gcc (gcc 13.2.1 "gcc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:03:12.152 C linker for the host machine: gcc ld.bfd 2.39-16 00:03:12.152 Host machine cpu family: x86_64 00:03:12.152 Host machine cpu: x86_64 00:03:12.152 Run-time dependency threads found: YES 00:03:12.152 Library dl found: YES 00:03:12.152 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:03:12.152 Run-time dependency json-c found: YES 0.17 00:03:12.152 Run-time dependency cmocka found: YES 1.1.7 00:03:12.152 Program pytest-3 found: NO 00:03:12.152 Program flake8 found: NO 00:03:12.152 Program misspell-fixer found: NO 00:03:12.152 Program restructuredtext-lint found: NO 00:03:12.152 Program valgrind found: YES (/usr/bin/valgrind) 00:03:12.152 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:12.152 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:12.152 Compiler for C supports arguments -Wwrite-strings: YES 00:03:12.152 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:12.152 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:12.152 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:12.152 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
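The two WARNING lines mean libvfio-user's meson.build declares meson_version '>= 0.53.0' while its test/meson.build files use the exclude_suites argument to add_test_setup, a feature that only exists from meson 0.57.0 onward. The warning is harmless in this job because the host interpreter is meson 1.3.1, as the Version line above shows; it would only bite users pinned to a meson older than 0.57.0. A quick way to confirm the interpreter in use is new enough:

    meson --version    # prints 1.3.1 in this job, well past the 0.57.0 feature cutoff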
00:03:12.152 Build targets in project: 8 00:03:12.152 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:12.152 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:12.152 00:03:12.152 libvfio-user 0.0.1 00:03:12.152 00:03:12.152 User defined options 00:03:12.152 buildtype : debug 00:03:12.152 default_library: shared 00:03:12.152 libdir : /usr/local/lib 00:03:12.152 00:03:12.152 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:13.109 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:13.109 [1/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:13.371 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:13.371 [3/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:13.371 [4/37] Compiling C object samples/null.p/null.c.o 00:03:13.371 [5/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:13.371 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:13.371 [7/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:13.371 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:13.371 [9/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:13.371 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:13.371 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:13.371 [12/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:13.371 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:13.371 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:13.371 [15/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:13.371 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:13.371 [17/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:03:13.371 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:13.371 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:13.371 [20/37] Compiling C object samples/client.p/client.c.o 00:03:13.371 [21/37] Compiling C object samples/server.p/server.c.o 00:03:13.371 [22/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:13.371 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:13.371 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:13.371 [25/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:13.371 [26/37] Linking target samples/client 00:03:13.634 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:13.634 [28/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:13.634 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:03:13.634 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:13.634 [31/37] Linking target test/unit_tests 00:03:13.897 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:13.897 [33/37] Linking target samples/server 00:03:13.897 [34/37] Linking target samples/gpio-pci-idio-16 00:03:13.897 [35/37] Linking target samples/lspci 00:03:13.897 [36/37] Linking target samples/null 00:03:13.897 [37/37] Linking target samples/shadow_ioeventfd_server 00:03:13.897 INFO: autodetecting backend as ninja 00:03:13.897 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
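Having autodetected ninja as the backend, meson drives the compile and then performs a DESTDIR-staged install, as the next line shows: files land under $DESTDIR followed by the configured libdir (/usr/local/lib per the user-defined options above), so nothing is written to the live /usr/local. A generic sketch of the same staging trick (directory names illustrative):

    # Stage the install into ./stage instead of the real filesystem root
    DESTDIR="$PWD/stage" meson install --quiet -C build-debug
    # With libdir=/usr/local/lib, the shared objects end up under the staged prefix
    ls stage/usr/local/lib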
00:03:13.897 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:14.474 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.474 ninja: no work to do. 00:03:26.707 CC lib/ut_mock/mock.o 00:03:26.707 CC lib/log/log.o 00:03:26.707 CC lib/log/log_flags.o 00:03:26.707 CC lib/log/log_deprecated.o 00:03:26.707 CC lib/ut/ut.o 00:03:26.707 LIB libspdk_log.a 00:03:26.707 LIB libspdk_ut.a 00:03:26.707 LIB libspdk_ut_mock.a 00:03:26.707 SO libspdk_ut_mock.so.6.0 00:03:26.707 SO libspdk_log.so.7.0 00:03:26.707 SO libspdk_ut.so.2.0 00:03:26.707 SYMLINK libspdk_ut_mock.so 00:03:26.708 SYMLINK libspdk_ut.so 00:03:26.708 SYMLINK libspdk_log.so 00:03:26.708 CC lib/ioat/ioat.o 00:03:26.708 CC lib/util/base64.o 00:03:26.708 CC lib/dma/dma.o 00:03:26.708 CXX lib/trace_parser/trace.o 00:03:26.708 CC lib/util/bit_array.o 00:03:26.708 CC lib/util/cpuset.o 00:03:26.708 CC lib/util/crc16.o 00:03:26.708 CC lib/util/crc32.o 00:03:26.708 CC lib/util/crc32c.o 00:03:26.708 CC lib/util/crc32_ieee.o 00:03:26.708 CC lib/util/crc64.o 00:03:26.708 CC lib/util/dif.o 00:03:26.708 CC lib/util/fd.o 00:03:26.708 CC lib/util/file.o 00:03:26.708 CC lib/util/hexlify.o 00:03:26.708 CC lib/util/iov.o 00:03:26.708 CC lib/util/math.o 00:03:26.708 CC lib/util/pipe.o 00:03:26.708 CC lib/util/strerror_tls.o 00:03:26.708 CC lib/util/string.o 00:03:26.708 CC lib/util/uuid.o 00:03:26.708 CC lib/util/fd_group.o 00:03:26.708 CC lib/util/xor.o 00:03:26.708 CC lib/util/zipf.o 00:03:26.708 CC lib/vfio_user/host/vfio_user_pci.o 00:03:26.708 CC lib/vfio_user/host/vfio_user.o 00:03:26.965 LIB libspdk_dma.a 00:03:26.965 SO libspdk_dma.so.4.0 00:03:26.965 LIB libspdk_ioat.a 00:03:26.965 SYMLINK libspdk_dma.so 00:03:26.965 SO libspdk_ioat.so.7.0 00:03:27.222 LIB libspdk_vfio_user.a 00:03:27.222 SYMLINK libspdk_ioat.so 00:03:27.222 SO libspdk_vfio_user.so.5.0 00:03:27.222 SYMLINK libspdk_vfio_user.so 00:03:27.222 LIB libspdk_util.a 00:03:27.222 SO libspdk_util.so.9.1 00:03:27.479 SYMLINK libspdk_util.so 00:03:27.737 CC lib/json/json_parse.o 00:03:27.737 CC lib/vmd/vmd.o 00:03:27.737 CC lib/env_dpdk/env.o 00:03:27.737 CC lib/json/json_util.o 00:03:27.737 CC lib/rdma_provider/common.o 00:03:27.737 CC lib/vmd/led.o 00:03:27.737 CC lib/env_dpdk/memory.o 00:03:27.737 CC lib/json/json_write.o 00:03:27.737 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:27.737 CC lib/env_dpdk/pci.o 00:03:27.737 CC lib/env_dpdk/init.o 00:03:27.737 CC lib/env_dpdk/threads.o 00:03:27.737 CC lib/conf/conf.o 00:03:27.737 CC lib/idxd/idxd.o 00:03:27.737 CC lib/env_dpdk/pci_ioat.o 00:03:27.737 CC lib/idxd/idxd_user.o 00:03:27.737 CC lib/env_dpdk/pci_virtio.o 00:03:27.737 CC lib/idxd/idxd_kernel.o 00:03:27.737 CC lib/rdma_utils/rdma_utils.o 00:03:27.737 CC lib/env_dpdk/pci_vmd.o 00:03:27.737 CC lib/env_dpdk/pci_idxd.o 00:03:27.737 CC lib/env_dpdk/pci_event.o 00:03:27.737 CC lib/env_dpdk/sigbus_handler.o 00:03:27.737 CC lib/env_dpdk/pci_dpdk.o 00:03:27.737 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:27.737 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:27.737 LIB libspdk_trace_parser.a 00:03:27.737 SO libspdk_trace_parser.so.5.0 00:03:27.737 SYMLINK libspdk_trace_parser.so 00:03:27.994 LIB libspdk_conf.a 00:03:27.994 SO libspdk_conf.so.6.0 00:03:27.994 LIB libspdk_rdma_provider.a 00:03:27.994 LIB libspdk_json.a 00:03:27.994 SYMLINK libspdk_conf.so 00:03:27.994 SO libspdk_rdma_provider.so.6.0 
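The CC/LIB/SO/SYMLINK lines here are SPDK's own make output: CC lines compile individual objects, LIB lines produce the static archives, and because configure ran with --with-shared each library also gets a versioned shared object (the SO lines) plus an unversioned development symlink (the SYMLINK lines). A hypothetical post-build check of one library's embedded SONAME, assuming SPDK's default build/lib output directory and the version shown in the log:

    readelf -d build/lib/libspdk_log.so.7.0 | grep SONAME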
00:03:27.994 SO libspdk_json.so.6.0 00:03:27.994 SYMLINK libspdk_rdma_provider.so 00:03:27.994 LIB libspdk_rdma_utils.a 00:03:27.994 SYMLINK libspdk_json.so 00:03:27.994 SO libspdk_rdma_utils.so.1.0 00:03:27.994 SYMLINK libspdk_rdma_utils.so 00:03:28.252 CC lib/jsonrpc/jsonrpc_server.o 00:03:28.252 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:28.252 CC lib/jsonrpc/jsonrpc_client.o 00:03:28.252 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:28.252 LIB libspdk_idxd.a 00:03:28.252 SO libspdk_idxd.so.12.0 00:03:28.509 SYMLINK libspdk_idxd.so 00:03:28.509 LIB libspdk_vmd.a 00:03:28.509 SO libspdk_vmd.so.6.0 00:03:28.509 SYMLINK libspdk_vmd.so 00:03:28.509 LIB libspdk_jsonrpc.a 00:03:28.509 SO libspdk_jsonrpc.so.6.0 00:03:28.509 SYMLINK libspdk_jsonrpc.so 00:03:28.766 CC lib/rpc/rpc.o 00:03:29.024 LIB libspdk_rpc.a 00:03:29.024 SO libspdk_rpc.so.6.0 00:03:29.024 SYMLINK libspdk_rpc.so 00:03:29.281 CC lib/keyring/keyring.o 00:03:29.281 CC lib/keyring/keyring_rpc.o 00:03:29.281 CC lib/notify/notify.o 00:03:29.281 CC lib/notify/notify_rpc.o 00:03:29.281 CC lib/trace/trace.o 00:03:29.281 CC lib/trace/trace_flags.o 00:03:29.281 CC lib/trace/trace_rpc.o 00:03:29.539 LIB libspdk_notify.a 00:03:29.539 SO libspdk_notify.so.6.0 00:03:29.539 LIB libspdk_keyring.a 00:03:29.539 SYMLINK libspdk_notify.so 00:03:29.539 LIB libspdk_trace.a 00:03:29.539 SO libspdk_keyring.so.1.0 00:03:29.539 SO libspdk_trace.so.10.0 00:03:29.539 SYMLINK libspdk_keyring.so 00:03:29.539 LIB libspdk_env_dpdk.a 00:03:29.539 SYMLINK libspdk_trace.so 00:03:29.539 SO libspdk_env_dpdk.so.14.1 00:03:29.797 CC lib/thread/thread.o 00:03:29.797 CC lib/thread/iobuf.o 00:03:29.797 CC lib/sock/sock.o 00:03:29.797 CC lib/sock/sock_rpc.o 00:03:29.797 SYMLINK libspdk_env_dpdk.so 00:03:30.055 LIB libspdk_sock.a 00:03:30.313 SO libspdk_sock.so.10.0 00:03:30.313 SYMLINK libspdk_sock.so 00:03:30.313 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:30.313 CC lib/nvme/nvme_ctrlr.o 00:03:30.313 CC lib/nvme/nvme_fabric.o 00:03:30.313 CC lib/nvme/nvme_ns_cmd.o 00:03:30.313 CC lib/nvme/nvme_ns.o 00:03:30.313 CC lib/nvme/nvme_pcie_common.o 00:03:30.313 CC lib/nvme/nvme_pcie.o 00:03:30.313 CC lib/nvme/nvme_qpair.o 00:03:30.313 CC lib/nvme/nvme.o 00:03:30.313 CC lib/nvme/nvme_quirks.o 00:03:30.313 CC lib/nvme/nvme_transport.o 00:03:30.313 CC lib/nvme/nvme_discovery.o 00:03:30.313 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:30.313 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:30.313 CC lib/nvme/nvme_tcp.o 00:03:30.313 CC lib/nvme/nvme_opal.o 00:03:30.313 CC lib/nvme/nvme_io_msg.o 00:03:30.313 CC lib/nvme/nvme_poll_group.o 00:03:30.313 CC lib/nvme/nvme_zns.o 00:03:30.313 CC lib/nvme/nvme_stubs.o 00:03:30.313 CC lib/nvme/nvme_auth.o 00:03:30.313 CC lib/nvme/nvme_cuse.o 00:03:30.313 CC lib/nvme/nvme_vfio_user.o 00:03:30.313 CC lib/nvme/nvme_rdma.o 00:03:31.247 LIB libspdk_thread.a 00:03:31.247 SO libspdk_thread.so.10.1 00:03:31.505 SYMLINK libspdk_thread.so 00:03:31.505 CC lib/blob/blobstore.o 00:03:31.505 CC lib/init/json_config.o 00:03:31.505 CC lib/accel/accel.o 00:03:31.505 CC lib/init/subsystem.o 00:03:31.505 CC lib/accel/accel_rpc.o 00:03:31.505 CC lib/blob/request.o 00:03:31.505 CC lib/virtio/virtio.o 00:03:31.505 CC lib/init/subsystem_rpc.o 00:03:31.505 CC lib/accel/accel_sw.o 00:03:31.505 CC lib/vfu_tgt/tgt_endpoint.o 00:03:31.505 CC lib/blob/zeroes.o 00:03:31.505 CC lib/init/rpc.o 00:03:31.505 CC lib/virtio/virtio_vhost_user.o 00:03:31.505 CC lib/vfu_tgt/tgt_rpc.o 00:03:31.505 CC lib/virtio/virtio_vfio_user.o 00:03:31.505 CC lib/blob/blob_bs_dev.o 00:03:31.505 CC lib/virtio/virtio_pci.o 
00:03:31.762 LIB libspdk_init.a 00:03:31.762 SO libspdk_init.so.5.0 00:03:32.020 LIB libspdk_vfu_tgt.a 00:03:32.020 SYMLINK libspdk_init.so 00:03:32.020 LIB libspdk_virtio.a 00:03:32.020 SO libspdk_vfu_tgt.so.3.0 00:03:32.020 SO libspdk_virtio.so.7.0 00:03:32.020 SYMLINK libspdk_vfu_tgt.so 00:03:32.020 SYMLINK libspdk_virtio.so 00:03:32.020 CC lib/event/app.o 00:03:32.020 CC lib/event/reactor.o 00:03:32.020 CC lib/event/log_rpc.o 00:03:32.020 CC lib/event/app_rpc.o 00:03:32.020 CC lib/event/scheduler_static.o 00:03:32.585 LIB libspdk_event.a 00:03:32.585 SO libspdk_event.so.14.0 00:03:32.585 LIB libspdk_accel.a 00:03:32.585 SYMLINK libspdk_event.so 00:03:32.585 SO libspdk_accel.so.15.1 00:03:32.585 SYMLINK libspdk_accel.so 00:03:32.843 CC lib/bdev/bdev.o 00:03:32.843 CC lib/bdev/bdev_rpc.o 00:03:32.843 CC lib/bdev/bdev_zone.o 00:03:32.843 CC lib/bdev/part.o 00:03:32.843 CC lib/bdev/scsi_nvme.o 00:03:33.101 LIB libspdk_nvme.a 00:03:33.101 SO libspdk_nvme.so.13.1 00:03:33.359 SYMLINK libspdk_nvme.so 00:03:34.735 LIB libspdk_blob.a 00:03:34.735 SO libspdk_blob.so.11.0 00:03:34.735 SYMLINK libspdk_blob.so 00:03:34.993 CC lib/blobfs/blobfs.o 00:03:34.993 CC lib/lvol/lvol.o 00:03:34.993 CC lib/blobfs/tree.o 00:03:35.559 LIB libspdk_bdev.a 00:03:35.559 SO libspdk_bdev.so.15.1 00:03:35.559 SYMLINK libspdk_bdev.so 00:03:35.825 LIB libspdk_blobfs.a 00:03:35.825 CC lib/ftl/ftl_core.o 00:03:35.825 CC lib/scsi/dev.o 00:03:35.825 CC lib/nvmf/ctrlr.o 00:03:35.825 CC lib/ftl/ftl_init.o 00:03:35.825 CC lib/scsi/lun.o 00:03:35.825 CC lib/ftl/ftl_layout.o 00:03:35.825 CC lib/nvmf/ctrlr_discovery.o 00:03:35.825 CC lib/nvmf/ctrlr_bdev.o 00:03:35.825 CC lib/scsi/port.o 00:03:35.825 CC lib/ftl/ftl_debug.o 00:03:35.825 CC lib/nvmf/subsystem.o 00:03:35.825 CC lib/ublk/ublk.o 00:03:35.825 CC lib/nbd/nbd.o 00:03:35.825 CC lib/scsi/scsi.o 00:03:35.825 CC lib/ftl/ftl_io.o 00:03:35.825 CC lib/ublk/ublk_rpc.o 00:03:35.825 CC lib/ftl/ftl_sb.o 00:03:35.825 CC lib/scsi/scsi_bdev.o 00:03:35.825 CC lib/nvmf/nvmf.o 00:03:35.825 CC lib/nbd/nbd_rpc.o 00:03:35.825 CC lib/scsi/scsi_pr.o 00:03:35.825 CC lib/nvmf/nvmf_rpc.o 00:03:35.825 CC lib/nvmf/transport.o 00:03:35.825 CC lib/scsi/scsi_rpc.o 00:03:35.825 CC lib/ftl/ftl_l2p.o 00:03:35.825 CC lib/ftl/ftl_l2p_flat.o 00:03:35.825 CC lib/scsi/task.o 00:03:35.825 CC lib/nvmf/tcp.o 00:03:35.825 CC lib/nvmf/stubs.o 00:03:35.825 CC lib/nvmf/mdns_server.o 00:03:35.825 CC lib/ftl/ftl_nv_cache.o 00:03:35.825 CC lib/nvmf/vfio_user.o 00:03:35.825 CC lib/ftl/ftl_band.o 00:03:35.825 CC lib/ftl/ftl_band_ops.o 00:03:35.825 CC lib/nvmf/rdma.o 00:03:35.825 CC lib/ftl/ftl_writer.o 00:03:35.825 CC lib/nvmf/auth.o 00:03:35.825 CC lib/ftl/ftl_rq.o 00:03:35.825 CC lib/ftl/ftl_reloc.o 00:03:35.825 CC lib/ftl/ftl_l2p_cache.o 00:03:35.825 CC lib/ftl/ftl_p2l.o 00:03:35.825 CC lib/ftl/mngt/ftl_mngt.o 00:03:35.825 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:35.825 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:35.825 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:35.825 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:35.825 SO libspdk_blobfs.so.10.0 00:03:35.825 SYMLINK libspdk_blobfs.so 00:03:35.825 LIB libspdk_lvol.a 00:03:35.825 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:36.085 SO libspdk_lvol.so.10.0 00:03:36.086 SYMLINK libspdk_lvol.so 00:03:36.086 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:36.086 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:36.086 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:36.086 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:36.086 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:36.086 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:36.086 CC 
lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:36.086 CC lib/ftl/utils/ftl_conf.o 00:03:36.086 CC lib/ftl/utils/ftl_md.o 00:03:36.086 CC lib/ftl/utils/ftl_mempool.o 00:03:36.086 CC lib/ftl/utils/ftl_bitmap.o 00:03:36.086 CC lib/ftl/utils/ftl_property.o 00:03:36.086 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:36.086 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:36.351 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:36.351 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:36.351 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:36.351 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:36.351 CC lib/ftl/base/ftl_base_dev.o 00:03:36.351 CC lib/ftl/base/ftl_base_bdev.o 00:03:36.351 CC lib/ftl/ftl_trace.o 00:03:36.613 LIB libspdk_nbd.a 00:03:36.613 SO libspdk_nbd.so.7.0 00:03:36.613 LIB libspdk_scsi.a 00:03:36.613 SYMLINK libspdk_nbd.so 00:03:36.613 SO libspdk_scsi.so.9.0 00:03:36.872 LIB libspdk_ublk.a 00:03:36.872 SO libspdk_ublk.so.3.0 00:03:36.872 SYMLINK libspdk_scsi.so 00:03:36.872 SYMLINK libspdk_ublk.so 00:03:36.872 CC lib/iscsi/conn.o 00:03:36.872 CC lib/iscsi/init_grp.o 00:03:36.872 CC lib/iscsi/iscsi.o 00:03:36.872 CC lib/iscsi/md5.o 00:03:36.872 CC lib/iscsi/param.o 00:03:36.872 CC lib/vhost/vhost.o 00:03:36.872 CC lib/iscsi/portal_grp.o 00:03:36.872 CC lib/vhost/vhost_rpc.o 00:03:36.872 CC lib/iscsi/tgt_node.o 00:03:36.872 CC lib/vhost/vhost_scsi.o 00:03:36.872 CC lib/iscsi/iscsi_subsystem.o 00:03:36.872 CC lib/vhost/vhost_blk.o 00:03:36.872 CC lib/iscsi/iscsi_rpc.o 00:03:36.872 CC lib/vhost/rte_vhost_user.o 00:03:36.872 CC lib/iscsi/task.o 00:03:37.131 LIB libspdk_ftl.a 00:03:37.390 SO libspdk_ftl.so.9.0 00:03:37.649 SYMLINK libspdk_ftl.so 00:03:38.215 LIB libspdk_vhost.a 00:03:38.215 LIB libspdk_nvmf.a 00:03:38.215 SO libspdk_vhost.so.8.0 00:03:38.215 SO libspdk_nvmf.so.18.1 00:03:38.215 SYMLINK libspdk_vhost.so 00:03:38.473 LIB libspdk_iscsi.a 00:03:38.473 SO libspdk_iscsi.so.8.0 00:03:38.473 SYMLINK libspdk_nvmf.so 00:03:38.731 SYMLINK libspdk_iscsi.so 00:03:38.989 CC module/vfu_device/vfu_virtio.o 00:03:38.989 CC module/env_dpdk/env_dpdk_rpc.o 00:03:38.989 CC module/vfu_device/vfu_virtio_blk.o 00:03:38.989 CC module/vfu_device/vfu_virtio_scsi.o 00:03:38.989 CC module/vfu_device/vfu_virtio_rpc.o 00:03:38.989 CC module/accel/ioat/accel_ioat.o 00:03:38.989 CC module/accel/ioat/accel_ioat_rpc.o 00:03:38.989 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:38.989 CC module/sock/posix/posix.o 00:03:38.989 CC module/scheduler/gscheduler/gscheduler.o 00:03:38.989 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:38.989 CC module/blob/bdev/blob_bdev.o 00:03:38.989 CC module/accel/iaa/accel_iaa.o 00:03:38.989 CC module/accel/error/accel_error.o 00:03:38.989 CC module/keyring/linux/keyring.o 00:03:38.989 CC module/keyring/file/keyring.o 00:03:38.989 CC module/accel/dsa/accel_dsa.o 00:03:38.989 CC module/accel/iaa/accel_iaa_rpc.o 00:03:38.989 CC module/keyring/linux/keyring_rpc.o 00:03:38.989 CC module/keyring/file/keyring_rpc.o 00:03:38.989 CC module/accel/error/accel_error_rpc.o 00:03:38.989 CC module/accel/dsa/accel_dsa_rpc.o 00:03:38.989 LIB libspdk_env_dpdk_rpc.a 00:03:38.989 SO libspdk_env_dpdk_rpc.so.6.0 00:03:39.304 SYMLINK libspdk_env_dpdk_rpc.so 00:03:39.304 LIB libspdk_keyring_file.a 00:03:39.304 LIB libspdk_keyring_linux.a 00:03:39.304 LIB 
libspdk_scheduler_dpdk_governor.a 00:03:39.304 LIB libspdk_scheduler_gscheduler.a 00:03:39.304 SO libspdk_keyring_file.so.1.0 00:03:39.304 SO libspdk_keyring_linux.so.1.0 00:03:39.304 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:39.304 SO libspdk_scheduler_gscheduler.so.4.0 00:03:39.304 LIB libspdk_accel_error.a 00:03:39.304 LIB libspdk_accel_ioat.a 00:03:39.304 LIB libspdk_scheduler_dynamic.a 00:03:39.304 LIB libspdk_accel_iaa.a 00:03:39.304 SO libspdk_accel_error.so.2.0 00:03:39.304 SYMLINK libspdk_keyring_file.so 00:03:39.304 SO libspdk_accel_ioat.so.6.0 00:03:39.304 SYMLINK libspdk_keyring_linux.so 00:03:39.304 SO libspdk_scheduler_dynamic.so.4.0 00:03:39.304 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:39.304 SYMLINK libspdk_scheduler_gscheduler.so 00:03:39.304 SO libspdk_accel_iaa.so.3.0 00:03:39.304 LIB libspdk_accel_dsa.a 00:03:39.304 SYMLINK libspdk_accel_error.so 00:03:39.304 LIB libspdk_blob_bdev.a 00:03:39.304 SYMLINK libspdk_accel_ioat.so 00:03:39.304 SYMLINK libspdk_scheduler_dynamic.so 00:03:39.304 SYMLINK libspdk_accel_iaa.so 00:03:39.304 SO libspdk_accel_dsa.so.5.0 00:03:39.304 SO libspdk_blob_bdev.so.11.0 00:03:39.304 SYMLINK libspdk_blob_bdev.so 00:03:39.304 SYMLINK libspdk_accel_dsa.so 00:03:39.563 LIB libspdk_vfu_device.a 00:03:39.563 SO libspdk_vfu_device.so.3.0 00:03:39.563 CC module/bdev/gpt/gpt.o 00:03:39.563 CC module/bdev/delay/vbdev_delay.o 00:03:39.563 CC module/bdev/gpt/vbdev_gpt.o 00:03:39.563 CC module/bdev/lvol/vbdev_lvol.o 00:03:39.563 CC module/bdev/raid/bdev_raid.o 00:03:39.563 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:39.563 CC module/bdev/malloc/bdev_malloc.o 00:03:39.563 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:39.563 CC module/bdev/nvme/bdev_nvme.o 00:03:39.563 CC module/bdev/raid/bdev_raid_rpc.o 00:03:39.563 CC module/bdev/split/vbdev_split.o 00:03:39.563 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:39.563 CC module/bdev/split/vbdev_split_rpc.o 00:03:39.563 CC module/blobfs/bdev/blobfs_bdev.o 00:03:39.563 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:39.563 CC module/bdev/error/vbdev_error.o 00:03:39.563 CC module/bdev/raid/bdev_raid_sb.o 00:03:39.563 CC module/bdev/nvme/nvme_rpc.o 00:03:39.563 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:39.563 CC module/bdev/null/bdev_null_rpc.o 00:03:39.563 CC module/bdev/raid/raid0.o 00:03:39.563 CC module/bdev/null/bdev_null.o 00:03:39.563 CC module/bdev/error/vbdev_error_rpc.o 00:03:39.563 CC module/bdev/raid/raid1.o 00:03:39.563 CC module/bdev/nvme/bdev_mdns_client.o 00:03:39.563 CC module/bdev/raid/concat.o 00:03:39.563 CC module/bdev/iscsi/bdev_iscsi.o 00:03:39.563 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:39.563 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:39.563 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:39.563 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:39.563 CC module/bdev/nvme/vbdev_opal.o 00:03:39.563 CC module/bdev/passthru/vbdev_passthru.o 00:03:39.563 CC module/bdev/aio/bdev_aio.o 00:03:39.563 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:39.563 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:39.563 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:39.563 CC module/bdev/aio/bdev_aio_rpc.o 00:03:39.563 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:39.563 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:39.563 CC module/bdev/ftl/bdev_ftl.o 00:03:39.563 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:39.824 SYMLINK libspdk_vfu_device.so 00:03:39.824 LIB libspdk_sock_posix.a 00:03:40.082 SO libspdk_sock_posix.so.6.0 00:03:40.082 LIB libspdk_blobfs_bdev.a 00:03:40.082 SO 
libspdk_blobfs_bdev.so.6.0 00:03:40.082 SYMLINK libspdk_sock_posix.so 00:03:40.082 LIB libspdk_bdev_split.a 00:03:40.082 SO libspdk_bdev_split.so.6.0 00:03:40.082 LIB libspdk_bdev_gpt.a 00:03:40.082 SYMLINK libspdk_blobfs_bdev.so 00:03:40.082 LIB libspdk_bdev_error.a 00:03:40.082 LIB libspdk_bdev_null.a 00:03:40.082 SO libspdk_bdev_gpt.so.6.0 00:03:40.082 SO libspdk_bdev_error.so.6.0 00:03:40.082 SYMLINK libspdk_bdev_split.so 00:03:40.082 SO libspdk_bdev_null.so.6.0 00:03:40.082 LIB libspdk_bdev_zone_block.a 00:03:40.082 LIB libspdk_bdev_passthru.a 00:03:40.082 LIB libspdk_bdev_iscsi.a 00:03:40.082 LIB libspdk_bdev_delay.a 00:03:40.082 LIB libspdk_bdev_ftl.a 00:03:40.082 SO libspdk_bdev_zone_block.so.6.0 00:03:40.082 SO libspdk_bdev_passthru.so.6.0 00:03:40.082 SO libspdk_bdev_iscsi.so.6.0 00:03:40.082 SYMLINK libspdk_bdev_error.so 00:03:40.082 SYMLINK libspdk_bdev_gpt.so 00:03:40.082 SO libspdk_bdev_delay.so.6.0 00:03:40.340 SYMLINK libspdk_bdev_null.so 00:03:40.340 SO libspdk_bdev_ftl.so.6.0 00:03:40.340 LIB libspdk_bdev_aio.a 00:03:40.340 SYMLINK libspdk_bdev_zone_block.so 00:03:40.340 SYMLINK libspdk_bdev_iscsi.so 00:03:40.340 SYMLINK libspdk_bdev_passthru.so 00:03:40.340 SO libspdk_bdev_aio.so.6.0 00:03:40.340 LIB libspdk_bdev_malloc.a 00:03:40.340 SYMLINK libspdk_bdev_delay.so 00:03:40.340 SYMLINK libspdk_bdev_ftl.so 00:03:40.340 SO libspdk_bdev_malloc.so.6.0 00:03:40.340 SYMLINK libspdk_bdev_aio.so 00:03:40.340 SYMLINK libspdk_bdev_malloc.so 00:03:40.340 LIB libspdk_bdev_virtio.a 00:03:40.340 LIB libspdk_bdev_lvol.a 00:03:40.340 SO libspdk_bdev_lvol.so.6.0 00:03:40.340 SO libspdk_bdev_virtio.so.6.0 00:03:40.340 SYMLINK libspdk_bdev_lvol.so 00:03:40.340 SYMLINK libspdk_bdev_virtio.so 00:03:40.906 LIB libspdk_bdev_raid.a 00:03:40.906 SO libspdk_bdev_raid.so.6.0 00:03:40.906 SYMLINK libspdk_bdev_raid.so 00:03:42.280 LIB libspdk_bdev_nvme.a 00:03:42.280 SO libspdk_bdev_nvme.so.7.0 00:03:42.280 SYMLINK libspdk_bdev_nvme.so 00:03:42.537 CC module/event/subsystems/vmd/vmd.o 00:03:42.537 CC module/event/subsystems/sock/sock.o 00:03:42.537 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:42.537 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:42.537 CC module/event/subsystems/iobuf/iobuf.o 00:03:42.537 CC module/event/subsystems/scheduler/scheduler.o 00:03:42.537 CC module/event/subsystems/keyring/keyring.o 00:03:42.537 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:42.537 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:42.537 LIB libspdk_event_keyring.a 00:03:42.796 LIB libspdk_event_vhost_blk.a 00:03:42.796 LIB libspdk_event_vfu_tgt.a 00:03:42.796 LIB libspdk_event_vmd.a 00:03:42.796 LIB libspdk_event_scheduler.a 00:03:42.796 LIB libspdk_event_sock.a 00:03:42.796 SO libspdk_event_keyring.so.1.0 00:03:42.796 LIB libspdk_event_iobuf.a 00:03:42.796 SO libspdk_event_vhost_blk.so.3.0 00:03:42.796 SO libspdk_event_vfu_tgt.so.3.0 00:03:42.796 SO libspdk_event_scheduler.so.4.0 00:03:42.796 SO libspdk_event_vmd.so.6.0 00:03:42.796 SO libspdk_event_sock.so.5.0 00:03:42.796 SO libspdk_event_iobuf.so.3.0 00:03:42.796 SYMLINK libspdk_event_keyring.so 00:03:42.796 SYMLINK libspdk_event_vhost_blk.so 00:03:42.796 SYMLINK libspdk_event_vfu_tgt.so 00:03:42.796 SYMLINK libspdk_event_scheduler.so 00:03:42.796 SYMLINK libspdk_event_sock.so 00:03:42.796 SYMLINK libspdk_event_vmd.so 00:03:42.796 SYMLINK libspdk_event_iobuf.so 00:03:43.054 CC module/event/subsystems/accel/accel.o 00:03:43.054 LIB libspdk_event_accel.a 00:03:43.054 SO libspdk_event_accel.so.6.0 00:03:43.312 SYMLINK 
libspdk_event_accel.so 00:03:43.312 CC module/event/subsystems/bdev/bdev.o 00:03:43.570 LIB libspdk_event_bdev.a 00:03:43.570 SO libspdk_event_bdev.so.6.0 00:03:43.570 SYMLINK libspdk_event_bdev.so 00:03:43.828 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:43.828 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:43.828 CC module/event/subsystems/ublk/ublk.o 00:03:43.828 CC module/event/subsystems/scsi/scsi.o 00:03:43.828 CC module/event/subsystems/nbd/nbd.o 00:03:43.828 LIB libspdk_event_ublk.a 00:03:43.828 LIB libspdk_event_nbd.a 00:03:43.828 LIB libspdk_event_scsi.a 00:03:43.828 SO libspdk_event_ublk.so.3.0 00:03:43.828 SO libspdk_event_nbd.so.6.0 00:03:43.828 SO libspdk_event_scsi.so.6.0 00:03:44.086 SYMLINK libspdk_event_ublk.so 00:03:44.086 SYMLINK libspdk_event_nbd.so 00:03:44.086 SYMLINK libspdk_event_scsi.so 00:03:44.086 LIB libspdk_event_nvmf.a 00:03:44.086 SO libspdk_event_nvmf.so.6.0 00:03:44.086 SYMLINK libspdk_event_nvmf.so 00:03:44.086 CC module/event/subsystems/iscsi/iscsi.o 00:03:44.086 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:44.344 LIB libspdk_event_vhost_scsi.a 00:03:44.344 SO libspdk_event_vhost_scsi.so.3.0 00:03:44.344 LIB libspdk_event_iscsi.a 00:03:44.344 SO libspdk_event_iscsi.so.6.0 00:03:44.344 SYMLINK libspdk_event_vhost_scsi.so 00:03:44.344 SYMLINK libspdk_event_iscsi.so 00:03:44.603 SO libspdk.so.6.0 00:03:44.603 SYMLINK libspdk.so 00:03:44.603 CC app/trace_record/trace_record.o 00:03:44.603 CXX app/trace/trace.o 00:03:44.603 CC app/spdk_lspci/spdk_lspci.o 00:03:44.603 CC app/spdk_nvme_perf/perf.o 00:03:44.603 CC app/spdk_top/spdk_top.o 00:03:44.603 CC app/spdk_nvme_identify/identify.o 00:03:44.603 CC test/rpc_client/rpc_client_test.o 00:03:44.867 TEST_HEADER include/spdk/accel.h 00:03:44.868 CC app/spdk_nvme_discover/discovery_aer.o 00:03:44.868 TEST_HEADER include/spdk/accel_module.h 00:03:44.868 TEST_HEADER include/spdk/assert.h 00:03:44.868 TEST_HEADER include/spdk/barrier.h 00:03:44.868 TEST_HEADER include/spdk/base64.h 00:03:44.868 TEST_HEADER include/spdk/bdev.h 00:03:44.868 TEST_HEADER include/spdk/bdev_module.h 00:03:44.868 TEST_HEADER include/spdk/bdev_zone.h 00:03:44.868 TEST_HEADER include/spdk/bit_array.h 00:03:44.868 TEST_HEADER include/spdk/bit_pool.h 00:03:44.868 TEST_HEADER include/spdk/blob_bdev.h 00:03:44.868 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:44.868 TEST_HEADER include/spdk/blobfs.h 00:03:44.868 TEST_HEADER include/spdk/blob.h 00:03:44.868 TEST_HEADER include/spdk/conf.h 00:03:44.868 TEST_HEADER include/spdk/config.h 00:03:44.868 TEST_HEADER include/spdk/cpuset.h 00:03:44.868 TEST_HEADER include/spdk/crc16.h 00:03:44.868 TEST_HEADER include/spdk/crc32.h 00:03:44.868 TEST_HEADER include/spdk/crc64.h 00:03:44.868 TEST_HEADER include/spdk/dif.h 00:03:44.868 TEST_HEADER include/spdk/dma.h 00:03:44.868 TEST_HEADER include/spdk/endian.h 00:03:44.868 TEST_HEADER include/spdk/env_dpdk.h 00:03:44.868 TEST_HEADER include/spdk/env.h 00:03:44.868 TEST_HEADER include/spdk/event.h 00:03:44.868 TEST_HEADER include/spdk/fd_group.h 00:03:44.868 TEST_HEADER include/spdk/fd.h 00:03:44.868 TEST_HEADER include/spdk/ftl.h 00:03:44.868 TEST_HEADER include/spdk/file.h 00:03:44.868 TEST_HEADER include/spdk/gpt_spec.h 00:03:44.868 TEST_HEADER include/spdk/hexlify.h 00:03:44.868 TEST_HEADER include/spdk/idxd.h 00:03:44.868 TEST_HEADER include/spdk/histogram_data.h 00:03:44.868 TEST_HEADER include/spdk/idxd_spec.h 00:03:44.868 TEST_HEADER include/spdk/init.h 00:03:44.868 TEST_HEADER include/spdk/ioat.h 00:03:44.868 TEST_HEADER 
include/spdk/iscsi_spec.h 00:03:44.868 TEST_HEADER include/spdk/ioat_spec.h 00:03:44.868 TEST_HEADER include/spdk/json.h 00:03:44.868 TEST_HEADER include/spdk/jsonrpc.h 00:03:44.868 TEST_HEADER include/spdk/keyring.h 00:03:44.868 TEST_HEADER include/spdk/likely.h 00:03:44.868 TEST_HEADER include/spdk/keyring_module.h 00:03:44.868 TEST_HEADER include/spdk/lvol.h 00:03:44.868 TEST_HEADER include/spdk/log.h 00:03:44.868 TEST_HEADER include/spdk/memory.h 00:03:44.868 TEST_HEADER include/spdk/mmio.h 00:03:44.868 TEST_HEADER include/spdk/nbd.h 00:03:44.868 TEST_HEADER include/spdk/notify.h 00:03:44.868 TEST_HEADER include/spdk/nvme.h 00:03:44.868 TEST_HEADER include/spdk/nvme_intel.h 00:03:44.868 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:44.868 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:44.868 TEST_HEADER include/spdk/nvme_spec.h 00:03:44.868 TEST_HEADER include/spdk/nvme_zns.h 00:03:44.868 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:44.868 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:44.868 TEST_HEADER include/spdk/nvmf.h 00:03:44.868 TEST_HEADER include/spdk/nvmf_spec.h 00:03:44.868 TEST_HEADER include/spdk/nvmf_transport.h 00:03:44.868 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:44.868 TEST_HEADER include/spdk/opal.h 00:03:44.868 TEST_HEADER include/spdk/opal_spec.h 00:03:44.868 TEST_HEADER include/spdk/pci_ids.h 00:03:44.868 TEST_HEADER include/spdk/pipe.h 00:03:44.868 TEST_HEADER include/spdk/queue.h 00:03:44.868 TEST_HEADER include/spdk/reduce.h 00:03:44.868 TEST_HEADER include/spdk/rpc.h 00:03:44.868 TEST_HEADER include/spdk/scsi.h 00:03:44.868 TEST_HEADER include/spdk/scheduler.h 00:03:44.868 TEST_HEADER include/spdk/scsi_spec.h 00:03:44.868 TEST_HEADER include/spdk/sock.h 00:03:44.868 TEST_HEADER include/spdk/stdinc.h 00:03:44.868 TEST_HEADER include/spdk/string.h 00:03:44.868 TEST_HEADER include/spdk/thread.h 00:03:44.868 TEST_HEADER include/spdk/trace.h 00:03:44.868 TEST_HEADER include/spdk/trace_parser.h 00:03:44.868 TEST_HEADER include/spdk/tree.h 00:03:44.868 TEST_HEADER include/spdk/ublk.h 00:03:44.868 TEST_HEADER include/spdk/util.h 00:03:44.868 CC app/spdk_dd/spdk_dd.o 00:03:44.868 TEST_HEADER include/spdk/uuid.h 00:03:44.868 TEST_HEADER include/spdk/version.h 00:03:44.868 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:44.868 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:44.868 TEST_HEADER include/spdk/vhost.h 00:03:44.868 TEST_HEADER include/spdk/vmd.h 00:03:44.868 TEST_HEADER include/spdk/xor.h 00:03:44.868 TEST_HEADER include/spdk/zipf.h 00:03:44.868 CXX test/cpp_headers/accel.o 00:03:44.868 CXX test/cpp_headers/accel_module.o 00:03:44.868 CXX test/cpp_headers/assert.o 00:03:44.868 CXX test/cpp_headers/barrier.o 00:03:44.868 CXX test/cpp_headers/base64.o 00:03:44.868 CXX test/cpp_headers/bdev.o 00:03:44.868 CXX test/cpp_headers/bdev_module.o 00:03:44.868 CC app/iscsi_tgt/iscsi_tgt.o 00:03:44.868 CXX test/cpp_headers/bdev_zone.o 00:03:44.868 CXX test/cpp_headers/bit_array.o 00:03:44.868 CXX test/cpp_headers/bit_pool.o 00:03:44.868 CXX test/cpp_headers/blob_bdev.o 00:03:44.868 CC app/nvmf_tgt/nvmf_main.o 00:03:44.868 CXX test/cpp_headers/blobfs_bdev.o 00:03:44.868 CXX test/cpp_headers/blobfs.o 00:03:44.868 CXX test/cpp_headers/blob.o 00:03:44.868 CXX test/cpp_headers/conf.o 00:03:44.868 CXX test/cpp_headers/config.o 00:03:44.868 CXX test/cpp_headers/cpuset.o 00:03:44.868 CXX test/cpp_headers/crc16.o 00:03:44.868 CC app/spdk_tgt/spdk_tgt.o 00:03:44.868 CXX test/cpp_headers/crc32.o 00:03:44.868 CC examples/util/zipf/zipf.o 00:03:44.868 CC 
test/thread/poller_perf/poller_perf.o 00:03:44.868 CC examples/ioat/perf/perf.o 00:03:44.868 CC examples/ioat/verify/verify.o 00:03:44.868 CC test/env/pci/pci_ut.o 00:03:44.868 CC test/env/vtophys/vtophys.o 00:03:44.868 CC app/fio/nvme/fio_plugin.o 00:03:44.868 CC test/app/histogram_perf/histogram_perf.o 00:03:44.868 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:44.868 CC test/env/memory/memory_ut.o 00:03:44.868 CC test/app/jsoncat/jsoncat.o 00:03:44.868 CC test/app/stub/stub.o 00:03:44.868 CC test/dma/test_dma/test_dma.o 00:03:44.868 CC app/fio/bdev/fio_plugin.o 00:03:44.868 CC test/app/bdev_svc/bdev_svc.o 00:03:45.131 LINK spdk_lspci 00:03:45.131 CC test/env/mem_callbacks/mem_callbacks.o 00:03:45.131 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:45.131 LINK spdk_nvme_discover 00:03:45.131 LINK rpc_client_test 00:03:45.131 LINK poller_perf 00:03:45.131 LINK vtophys 00:03:45.131 LINK interrupt_tgt 00:03:45.131 LINK nvmf_tgt 00:03:45.131 CXX test/cpp_headers/crc64.o 00:03:45.131 LINK jsoncat 00:03:45.131 CXX test/cpp_headers/dif.o 00:03:45.131 CXX test/cpp_headers/dma.o 00:03:45.131 CXX test/cpp_headers/endian.o 00:03:45.390 LINK env_dpdk_post_init 00:03:45.390 CXX test/cpp_headers/env_dpdk.o 00:03:45.390 LINK zipf 00:03:45.390 LINK histogram_perf 00:03:45.390 LINK iscsi_tgt 00:03:45.390 LINK spdk_trace_record 00:03:45.390 CXX test/cpp_headers/env.o 00:03:45.390 CXX test/cpp_headers/event.o 00:03:45.390 LINK stub 00:03:45.390 CXX test/cpp_headers/fd_group.o 00:03:45.390 CXX test/cpp_headers/fd.o 00:03:45.390 CXX test/cpp_headers/file.o 00:03:45.390 CXX test/cpp_headers/ftl.o 00:03:45.390 CXX test/cpp_headers/gpt_spec.o 00:03:45.390 CXX test/cpp_headers/hexlify.o 00:03:45.390 LINK spdk_tgt 00:03:45.390 CXX test/cpp_headers/histogram_data.o 00:03:45.391 CXX test/cpp_headers/idxd.o 00:03:45.391 LINK verify 00:03:45.391 CXX test/cpp_headers/idxd_spec.o 00:03:45.391 LINK ioat_perf 00:03:45.391 CXX test/cpp_headers/init.o 00:03:45.391 LINK bdev_svc 00:03:45.391 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:45.391 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:45.391 CXX test/cpp_headers/ioat.o 00:03:45.391 CXX test/cpp_headers/ioat_spec.o 00:03:45.391 CXX test/cpp_headers/iscsi_spec.o 00:03:45.655 CXX test/cpp_headers/json.o 00:03:45.656 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:45.656 LINK spdk_dd 00:03:45.656 CXX test/cpp_headers/jsonrpc.o 00:03:45.656 CXX test/cpp_headers/keyring.o 00:03:45.656 CXX test/cpp_headers/keyring_module.o 00:03:45.656 CXX test/cpp_headers/likely.o 00:03:45.656 LINK spdk_trace 00:03:45.656 CXX test/cpp_headers/log.o 00:03:45.656 LINK pci_ut 00:03:45.656 CXX test/cpp_headers/lvol.o 00:03:45.656 CXX test/cpp_headers/memory.o 00:03:45.656 CXX test/cpp_headers/mmio.o 00:03:45.656 CXX test/cpp_headers/nbd.o 00:03:45.656 CXX test/cpp_headers/notify.o 00:03:45.656 CXX test/cpp_headers/nvme.o 00:03:45.656 CXX test/cpp_headers/nvme_intel.o 00:03:45.656 CXX test/cpp_headers/nvme_ocssd.o 00:03:45.656 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:45.656 CXX test/cpp_headers/nvme_spec.o 00:03:45.656 CXX test/cpp_headers/nvme_zns.o 00:03:45.656 CXX test/cpp_headers/nvmf_cmd.o 00:03:45.656 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:45.656 CXX test/cpp_headers/nvmf.o 00:03:45.656 LINK test_dma 00:03:45.656 CXX test/cpp_headers/nvmf_spec.o 00:03:45.656 CXX test/cpp_headers/nvmf_transport.o 00:03:45.656 CXX test/cpp_headers/opal.o 00:03:45.919 CXX test/cpp_headers/opal_spec.o 00:03:45.919 CXX test/cpp_headers/pci_ids.o 00:03:45.919 CC 
test/event/event_perf/event_perf.o 00:03:45.919 CXX test/cpp_headers/pipe.o 00:03:45.919 LINK nvme_fuzz 00:03:45.919 CC test/event/reactor/reactor.o 00:03:45.919 CC examples/vmd/lsvmd/lsvmd.o 00:03:45.919 CC examples/vmd/led/led.o 00:03:45.919 CC examples/idxd/perf/perf.o 00:03:45.919 CC examples/sock/hello_world/hello_sock.o 00:03:45.919 CC test/event/reactor_perf/reactor_perf.o 00:03:45.919 CXX test/cpp_headers/queue.o 00:03:45.919 CXX test/cpp_headers/reduce.o 00:03:45.919 CXX test/cpp_headers/rpc.o 00:03:45.919 CXX test/cpp_headers/scheduler.o 00:03:45.919 LINK spdk_bdev 00:03:45.919 CC examples/thread/thread/thread_ex.o 00:03:45.919 CXX test/cpp_headers/scsi.o 00:03:45.919 CXX test/cpp_headers/scsi_spec.o 00:03:45.919 CXX test/cpp_headers/sock.o 00:03:46.179 CC test/event/app_repeat/app_repeat.o 00:03:46.179 LINK spdk_nvme 00:03:46.179 CXX test/cpp_headers/stdinc.o 00:03:46.179 CXX test/cpp_headers/string.o 00:03:46.179 CXX test/cpp_headers/thread.o 00:03:46.179 CC test/event/scheduler/scheduler.o 00:03:46.179 CXX test/cpp_headers/trace.o 00:03:46.179 CXX test/cpp_headers/trace_parser.o 00:03:46.179 CXX test/cpp_headers/tree.o 00:03:46.179 CXX test/cpp_headers/ublk.o 00:03:46.179 CXX test/cpp_headers/util.o 00:03:46.179 CXX test/cpp_headers/uuid.o 00:03:46.179 CXX test/cpp_headers/version.o 00:03:46.179 CXX test/cpp_headers/vfio_user_pci.o 00:03:46.179 CXX test/cpp_headers/vfio_user_spec.o 00:03:46.179 CXX test/cpp_headers/vhost.o 00:03:46.179 CXX test/cpp_headers/vmd.o 00:03:46.179 CXX test/cpp_headers/xor.o 00:03:46.179 LINK event_perf 00:03:46.179 CXX test/cpp_headers/zipf.o 00:03:46.179 LINK lsvmd 00:03:46.179 LINK spdk_nvme_perf 00:03:46.179 LINK reactor 00:03:46.179 LINK reactor_perf 00:03:46.179 CC app/vhost/vhost.o 00:03:46.179 LINK led 00:03:46.439 LINK mem_callbacks 00:03:46.439 LINK vhost_fuzz 00:03:46.439 LINK spdk_top 00:03:46.439 LINK app_repeat 00:03:46.439 LINK hello_sock 00:03:46.439 LINK spdk_nvme_identify 00:03:46.439 CC test/nvme/reset/reset.o 00:03:46.439 CC test/nvme/sgl/sgl.o 00:03:46.439 CC test/nvme/aer/aer.o 00:03:46.439 CC test/nvme/startup/startup.o 00:03:46.439 CC test/nvme/reserve/reserve.o 00:03:46.439 CC test/nvme/e2edp/nvme_dp.o 00:03:46.439 CC test/nvme/overhead/overhead.o 00:03:46.439 CC test/nvme/err_injection/err_injection.o 00:03:46.439 CC test/nvme/connect_stress/connect_stress.o 00:03:46.439 CC test/nvme/simple_copy/simple_copy.o 00:03:46.439 CC test/nvme/boot_partition/boot_partition.o 00:03:46.439 CC test/nvme/compliance/nvme_compliance.o 00:03:46.439 LINK thread 00:03:46.698 CC test/accel/dif/dif.o 00:03:46.698 LINK scheduler 00:03:46.698 CC test/blobfs/mkfs/mkfs.o 00:03:46.698 CC test/nvme/fused_ordering/fused_ordering.o 00:03:46.698 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:46.698 CC test/nvme/fdp/fdp.o 00:03:46.698 LINK idxd_perf 00:03:46.698 CC test/nvme/cuse/cuse.o 00:03:46.698 CC test/lvol/esnap/esnap.o 00:03:46.698 LINK vhost 00:03:46.698 LINK boot_partition 00:03:46.698 LINK reserve 00:03:46.698 LINK startup 00:03:46.957 LINK simple_copy 00:03:46.957 LINK fused_ordering 00:03:46.957 LINK sgl 00:03:46.957 LINK doorbell_aers 00:03:46.957 LINK err_injection 00:03:46.957 LINK connect_stress 00:03:46.957 LINK aer 00:03:46.957 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:46.957 CC examples/nvme/hello_world/hello_world.o 00:03:46.957 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:46.957 CC examples/nvme/abort/abort.o 00:03:46.957 CC examples/nvme/hotplug/hotplug.o 00:03:46.957 CC examples/nvme/arbitration/arbitration.o 00:03:46.957 CC 
examples/nvme/pmr_persistence/pmr_persistence.o 00:03:46.957 CC examples/nvme/reconnect/reconnect.o 00:03:46.957 LINK mkfs 00:03:46.957 LINK nvme_dp 00:03:46.957 LINK reset 00:03:46.957 CC examples/accel/perf/accel_perf.o 00:03:46.957 LINK overhead 00:03:46.957 CC examples/blob/hello_world/hello_blob.o 00:03:46.957 CC examples/blob/cli/blobcli.o 00:03:47.216 LINK nvme_compliance 00:03:47.216 LINK memory_ut 00:03:47.216 LINK dif 00:03:47.216 LINK fdp 00:03:47.216 LINK hotplug 00:03:47.216 LINK hello_world 00:03:47.216 LINK cmb_copy 00:03:47.216 LINK pmr_persistence 00:03:47.216 LINK arbitration 00:03:47.474 LINK hello_blob 00:03:47.474 LINK abort 00:03:47.474 LINK reconnect 00:03:47.474 LINK nvme_manage 00:03:47.474 LINK accel_perf 00:03:47.474 CC test/bdev/bdevio/bdevio.o 00:03:47.731 LINK blobcli 00:03:47.731 LINK iscsi_fuzz 00:03:47.989 CC examples/bdev/hello_world/hello_bdev.o 00:03:47.989 CC examples/bdev/bdevperf/bdevperf.o 00:03:47.989 LINK bdevio 00:03:47.989 LINK cuse 00:03:48.246 LINK hello_bdev 00:03:48.504 LINK bdevperf 00:03:49.068 CC examples/nvmf/nvmf/nvmf.o 00:03:49.326 LINK nvmf 00:03:51.854 LINK esnap 00:03:51.854 00:03:51.854 real 0m41.384s 00:03:51.854 user 7m25.170s 00:03:51.854 sys 1m50.386s 00:03:51.854 21:10:26 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:51.854 21:10:26 make -- common/autotest_common.sh@10 -- $ set +x 00:03:51.854 ************************************ 00:03:51.854 END TEST make 00:03:51.854 ************************************ 00:03:51.854 21:10:26 -- common/autotest_common.sh@1142 -- $ return 0 00:03:51.854 21:10:26 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:51.854 21:10:26 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:51.854 21:10:26 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:51.854 21:10:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.854 21:10:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:51.854 21:10:26 -- pm/common@44 -- $ pid=664046 00:03:51.854 21:10:26 -- pm/common@50 -- $ kill -TERM 664046 00:03:51.854 21:10:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.854 21:10:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:51.854 21:10:26 -- pm/common@44 -- $ pid=664048 00:03:51.854 21:10:26 -- pm/common@50 -- $ kill -TERM 664048 00:03:51.854 21:10:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.854 21:10:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:51.854 21:10:26 -- pm/common@44 -- $ pid=664050 00:03:51.854 21:10:26 -- pm/common@50 -- $ kill -TERM 664050 00:03:51.854 21:10:26 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:51.854 21:10:26 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:51.854 21:10:26 -- pm/common@44 -- $ pid=664078 00:03:51.854 21:10:26 -- pm/common@50 -- $ sudo -E kill -TERM 664078 00:03:51.854 21:10:26 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:51.854 21:10:26 -- nvmf/common.sh@7 -- # uname -s 00:03:51.854 21:10:26 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:51.854 21:10:26 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:51.854 21:10:26 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 
00:03:51.854 21:10:26 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:51.854 21:10:26 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:51.854 21:10:26 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:51.854 21:10:26 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:51.854 21:10:26 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:51.854 21:10:26 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:51.854 21:10:26 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:52.113 21:10:26 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:03:52.113 21:10:26 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:03:52.113 21:10:26 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.113 21:10:26 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:52.113 21:10:26 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:03:52.113 21:10:26 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.113 21:10:26 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:52.113 21:10:26 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.113 21:10:26 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.113 21:10:26 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.113 21:10:26 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.113 21:10:26 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.113 21:10:26 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.113 21:10:26 -- paths/export.sh@5 -- # export PATH 00:03:52.113 21:10:26 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.113 21:10:26 -- nvmf/common.sh@47 -- # : 0 00:03:52.113 21:10:26 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:03:52.113 21:10:26 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:03:52.113 21:10:26 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:52.113 21:10:26 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.113 21:10:26 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.113 21:10:26 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:03:52.113 21:10:26 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:03:52.113 21:10:26 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:03:52.113 21:10:26 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:52.113 21:10:26 -- spdk/autotest.sh@32 -- # uname -s 00:03:52.113 21:10:26 -- 
spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:52.113 21:10:26 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:52.113 21:10:26 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:52.113 21:10:26 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:52.113 21:10:26 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:52.113 21:10:26 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:52.113 21:10:26 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:52.113 21:10:26 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:52.113 21:10:26 -- spdk/autotest.sh@48 -- # udevadm_pid=741462 00:03:52.113 21:10:26 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:52.113 21:10:26 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:52.113 21:10:26 -- pm/common@17 -- # local monitor 00:03:52.113 21:10:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.113 21:10:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.113 21:10:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.113 21:10:26 -- pm/common@21 -- # date +%s 00:03:52.113 21:10:26 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:52.113 21:10:26 -- pm/common@21 -- # date +%s 00:03:52.113 21:10:26 -- pm/common@25 -- # sleep 1 00:03:52.113 21:10:26 -- pm/common@21 -- # date +%s 00:03:52.113 21:10:26 -- pm/common@21 -- # date +%s 00:03:52.113 21:10:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720725026 00:03:52.113 21:10:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720725026 00:03:52.113 21:10:26 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720725026 00:03:52.113 21:10:26 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1720725026 00:03:52.113 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720725026_collect-vmstat.pm.log 00:03:52.113 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720725026_collect-cpu-load.pm.log 00:03:52.113 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720725026_collect-cpu-temp.pm.log 00:03:52.113 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1720725026_collect-bmc-pm.bmc.pm.log 00:03:53.047 21:10:27 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:53.047 21:10:27 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:53.047 21:10:27 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.047 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:03:53.047 21:10:27 -- spdk/autotest.sh@59 -- # 
create_test_list 00:03:53.047 21:10:27 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:53.047 21:10:27 -- common/autotest_common.sh@10 -- # set +x 00:03:53.047 21:10:27 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:53.047 21:10:27 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.047 21:10:27 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.047 21:10:27 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:53.047 21:10:27 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:53.047 21:10:27 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:53.047 21:10:27 -- common/autotest_common.sh@1455 -- # uname 00:03:53.047 21:10:27 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:03:53.047 21:10:27 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:53.047 21:10:27 -- common/autotest_common.sh@1475 -- # uname 00:03:53.047 21:10:27 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:03:53.047 21:10:27 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:53.047 21:10:27 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:03:53.047 21:10:27 -- spdk/autotest.sh@72 -- # hash lcov 00:03:53.047 21:10:27 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:03:53.047 21:10:27 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:03:53.047 --rc lcov_branch_coverage=1 00:03:53.047 --rc lcov_function_coverage=1 00:03:53.047 --rc genhtml_branch_coverage=1 00:03:53.047 --rc genhtml_function_coverage=1 00:03:53.047 --rc genhtml_legend=1 00:03:53.047 --rc geninfo_all_blocks=1 00:03:53.047 ' 00:03:53.047 21:10:27 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:03:53.047 --rc lcov_branch_coverage=1 00:03:53.047 --rc lcov_function_coverage=1 00:03:53.047 --rc genhtml_branch_coverage=1 00:03:53.047 --rc genhtml_function_coverage=1 00:03:53.047 --rc genhtml_legend=1 00:03:53.047 --rc geninfo_all_blocks=1 00:03:53.047 ' 00:03:53.047 21:10:27 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:03:53.047 --rc lcov_branch_coverage=1 00:03:53.047 --rc lcov_function_coverage=1 00:03:53.047 --rc genhtml_branch_coverage=1 00:03:53.047 --rc genhtml_function_coverage=1 00:03:53.047 --rc genhtml_legend=1 00:03:53.047 --rc geninfo_all_blocks=1 00:03:53.047 --no-external' 00:03:53.047 21:10:27 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:03:53.047 --rc lcov_branch_coverage=1 00:03:53.047 --rc lcov_function_coverage=1 00:03:53.047 --rc genhtml_branch_coverage=1 00:03:53.047 --rc genhtml_function_coverage=1 00:03:53.047 --rc genhtml_legend=1 00:03:53.047 --rc geninfo_all_blocks=1 00:03:53.047 --no-external' 00:03:53.047 21:10:27 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:03:53.047 lcov: LCOV version 1.14 00:03:53.047 21:10:27 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:59.636 
[geninfo emitted the same two-line warning for every header object under /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/, from accel.gcno through vfio_user_pci.gcno: "<header>.gcno:no functions found" followed by "geninfo: WARNING: GCOV did not produce any data for <header>.gcno"; the captured log ends truncated partway through this run]
produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:03:59.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:03:59.638 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:03:59.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:03:59.638 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:03:59.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:03:59.638 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:03:59.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:03:59.638 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:03:59.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:03:59.638 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:04:21.556 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:21.556 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:28.116 21:11:01 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:04:28.116 21:11:01 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:28.116 21:11:01 -- common/autotest_common.sh@10 -- # set +x 00:04:28.116 21:11:01 -- spdk/autotest.sh@91 -- # rm -f 00:04:28.116 21:11:01 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:28.116 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:04:28.116 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:04:28.116 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:04:28.116 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:04:28.116 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:04:28.116 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:04:28.116 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:04:28.116 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:04:28.116 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:04:28.116 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:04:28.116 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:04:28.116 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:04:28.375 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:04:28.375 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:04:28.375 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:04:28.375 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:04:28.375 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:04:28.375 21:11:03 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:04:28.375 21:11:03 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:04:28.375 21:11:03 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:04:28.375 21:11:03 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:04:28.375 
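The pre_cleanup step entered here scans for zoned namespaces (the get_zoned_devs loop traced just below), then wipes the first MiB of every ordinary NVMe namespace that no longer carries a valid partition table. A minimal bash sketch of that flow, assuming the same sysfs layout and tools the trace shows (blkid, dd); the real helpers in autotest_common.sh and scripts/common.sh are more general:

  shopt -s extglob nullglob

  # 1) Collect zoned block devices: they must never be wiped or reused as test disks.
  declare -A zoned_devs=()
  for nvme in /sys/block/nvme*; do
      # The kernel reports "none" in queue/zoned for a conventional device.
      if [[ -e $nvme/queue/zoned && $(<"$nvme/queue/zoned") != none ]]; then
          zoned_devs[${nvme##*/}]=1
      fi
  done

  # 2) For every whole namespace (no partitions) that is not zoned and shows
  #    no partition table, zap the first MiB so stale metadata cannot leak
  #    into the next run ("No valid GPT data, bailing" in the trace below).
  for dev in /dev/nvme*n!(*p*); do
      [[ -n ${zoned_devs[${dev##*/}]:-} ]] && continue
      pt=$(blkid -s PTTYPE -o value "$dev" || true)
      if [[ -z $pt ]]; then
          dd if=/dev/zero of="$dev" bs=1M count=1
      fi
  done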
21:11:03 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:04:28.375 21:11:03 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:04:28.375 21:11:03 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:04:28.375 21:11:03 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:28.375 21:11:03 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:04:28.375 21:11:03 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:04:28.375 21:11:03 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:04:28.375 21:11:03 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:04:28.375 21:11:03 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:04:28.375 21:11:03 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:04:28.375 21:11:03 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:28.375 No valid GPT data, bailing 00:04:28.375 21:11:03 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:28.375 21:11:03 -- scripts/common.sh@391 -- # pt= 00:04:28.375 21:11:03 -- scripts/common.sh@392 -- # return 1 00:04:28.375 21:11:03 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:28.375 1+0 records in 00:04:28.375 1+0 records out 00:04:28.375 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00178557 s, 587 MB/s 00:04:28.375 21:11:03 -- spdk/autotest.sh@118 -- # sync 00:04:28.375 21:11:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:28.375 21:11:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:28.375 21:11:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:30.277 21:11:05 -- spdk/autotest.sh@124 -- # uname -s 00:04:30.536 21:11:05 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:04:30.536 21:11:05 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:30.536 21:11:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.536 21:11:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.536 21:11:05 -- common/autotest_common.sh@10 -- # set +x 00:04:30.536 ************************************ 00:04:30.536 START TEST setup.sh 00:04:30.536 ************************************ 00:04:30.536 21:11:05 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:04:30.536 * Looking for test storage... 00:04:30.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:30.536 21:11:05 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:04:30.536 21:11:05 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:04:30.536 21:11:05 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:30.536 21:11:05 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:30.536 21:11:05 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:30.536 21:11:05 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:30.536 ************************************ 00:04:30.536 START TEST acl 00:04:30.536 ************************************ 00:04:30.536 21:11:05 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:04:30.536 * Looking for test storage... 
00:04:30.536 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:30.536
21:11:05 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:04:30.536 21:11:05 setup.sh.acl -- common/autotest_common.sh@1669-1665 -- # (same zoned scan as in pre_cleanup: /sys/block/nvme0n1/queue/zoned is none, nothing excluded) 00:04:30.536
21:11:05 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:04:30.536 21:11:05 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:04:30.536 21:11:05 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:04:30.536 21:11:05 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:04:30.536
21:11:05 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:04:30.536 21:11:05 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:30.536 21:11:05 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:31.913
21:11:06 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:04:31.913 21:11:06 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:04:31.913 21:11:06 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:31.913 21:11:06 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:04:31.913 21:11:06 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:04:31.913 21:11:06 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:33.288
Hugepages 00:04:33.288 node hugesize free / total 00:04:33.288
00:04:33.288 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:33.288
21:11:07 setup.sh.acl -- setup/acl.sh@19-20 -- # (the read loop skips the four hugepage-summary rows, then skips the sixteen I/OAT DMA channels 0000:00:04.0-04.7 and 0000:80:04.0-04.7: ioatdma != nvme) 00:04:33.288
21:11:07 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:04:33.288 21:11:07 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:04:33.288 21:11:07 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:04:33.288 21:11:07 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:04:33.288 21:11:07 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:04:33.288 21:11:07 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:04:33.288 21:11:07 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:04:33.288
21:11:07 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:04:33.288 21:11:07 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:33.288 21:11:07 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:33.288 21:11:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:33.288
************************************ 00:04:33.288 START TEST denied 00:04:33.288 ************************************ 00:04:33.288
21:11:07 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:04:33.288 21:11:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:04:33.288 21:11:07 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:04:33.288 21:11:07 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:04:33.288 21:11:07 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:04:33.288 21:11:07 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:34.663
0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:04:34.663
21:11:09 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:34.663 21:11:09 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:37.241
00:04:37.241 real 0m3.899s 00:04:37.241 user 0m1.145s 00:04:37.241 sys 0m1.851s 00:04:37.241
21:11:11 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:37.241 21:11:11 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:04:37.241
************************************ 00:04:37.241 END TEST denied 00:04:37.241 ************************************ 00:04:37.241
21:11:11 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:37.241 21:11:11 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:04:37.241 21:11:11 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.241 21:11:11 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.241 21:11:11 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:37.241
************************************ 00:04:37.241 START TEST allowed 00:04:37.241 ************************************ 00:04:37.241
21:11:11 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:04:37.241 21:11:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:04:37.241 21:11:11 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:04:37.241 21:11:11 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:04:37.241 21:11:11 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:04:37.241 21:11:11 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:39.780
0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:39.780
21:11:14 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:04:39.780 21:11:14 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:04:39.780 21:11:14 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:04:39.780 21:11:14 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:39.780 21:11:14 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:41.162
00:04:41.162 real 0m3.699s 00:04:41.162 user 0m0.967s 00:04:41.162 sys 0m1.567s 00:04:41.162
21:11:15 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.162 21:11:15 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:04:41.162
************************************ 00:04:41.162 END TEST allowed 00:04:41.162 ************************************ 00:04:41.162
21:11:15 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:04:41.162
00:04:41.162 real 0m10.407s 00:04:41.162 user 0m3.192s 00:04:41.162 sys 0m5.210s 00:04:41.162
21:11:15 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.162 21:11:15 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:04:41.162
************************************ 00:04:41.162 END TEST acl 00:04:41.162 ************************************ 00:04:41.162
21:11:15 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:41.162
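The denied/allowed pair above exercises setup.sh's PCI_BLOCKED and PCI_ALLOWED environment filters: the same script either refuses to touch a controller or rebinds it to vfio-pci depending on the lists. A compressed sketch of the filter pattern; this is illustrative only, not scripts/setup.sh verbatim:

  # Reject a controller when it is explicitly blocked, or when an allow-list
  # exists and the controller is not on it.
  pci_can_use() {
      local bdf=$1 dev
      for dev in $PCI_BLOCKED; do
          [[ $bdf == "$dev" ]] && return 1
      done
      [[ -z $PCI_ALLOWED ]] && return 0   # empty allow-list means everything
      for dev in $PCI_ALLOWED; do
          [[ $bdf == "$dev" ]] && return 0
      done
      return 1
  }

  if ! pci_can_use 0000:88:00.0; then
      echo "Skipping denied controller at 0000:88:00.0"
  fi

Run with PCI_BLOCKED=' 0000:88:00.0' this prints the same "Skipping denied controller" line the denied test greps for; with PCI_ALLOWED=0000:88:00.0 and an empty block-list it falls through to the vfio-pci rebind the allowed test expects.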
21:11:15 setup.sh -- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:41.162 21:11:15 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.162 21:11:15 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.162 21:11:15 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:41.162
************************************ 00:04:41.162 START TEST hugepages 00:04:41.162 ************************************ 00:04:41.162
21:11:15 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:04:41.162 * Looking for test storage... 00:04:41.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:41.162
21:11:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:04:41.162
21:11:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:04:41.162 21:11:15 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:04:41.163
21:11:15 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 41302856 kB' 'MemAvailable: 44804936 kB' 'Buffers: 2704 kB' 'Cached: 12718424 kB' 'SwapCached: 0 kB' 'Active: 9697564 kB' 'Inactive: 3500384 kB' 'Active(anon): 9302932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 480164 kB' 'Mapped: 167388 kB' 'Shmem: 8826112 kB' 'KReclaimable: 201372 kB' 'Slab: 565936 kB' 'SReclaimable: 201372 kB' 'SUnreclaim: 364564 kB' 'KernelStack: 12832 kB' 'PageTables: 8376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562312 kB' 'Committed_AS: 10426944 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195968 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:41.163
21:11:15 setup.sh.hugepages -- setup/common.sh@32 -- # (each field name, MemTotal through HugePages_Surp, is tested against Hugepagesize and skipped with continue until the Hugepagesize row matches) 00:04:41.164
21:11:15 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:04:41.164
21:11:15 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:04:41.164
21:11:15 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:41.164
21:11:15 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@39-41 -- # (for node0 and node1, echo 0 into every "/sys/devices/system/node/node$node/hugepages/hugepages-"* counter) 00:04:41.164
21:11:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:41.164 21:11:15 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:41.164
21:11:15 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:04:41.164 21:11:15 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.164 21:11:15 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.164 21:11:15 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:41.164
************************************ 00:04:41.164 START TEST default_setup 00:04:41.164 ************************************ 00:04:41.164
21:11:15 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:04:41.164
21:11:15 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:04:41.164 21:11:15 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:42.544
00:04:42.544 (all sixteen I/OAT channels rebound for the test: 0000:00:04.0-04.7 and 0000:80:04.0-04.7, 8086 0e20-0e27, ioatdma -> vfio-pci) 00:04:42.544
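The long Hugepagesize lookup traced above (and the AnonHugePages lookup that follows) is get_meminfo at work: read every "Name: value" row of a meminfo file and stop on the requested key. A minimal stand-alone version of the same idea, mirroring the mapfile/prefix-strip/read steps the trace shows; the optional node argument for per-node /sys meminfo files is sketched from the [[ -e /sys/devices/system/node/node/meminfo ]] test above:

  shopt -s extglob

  # get_meminfo <field> [node]: print the value column of one meminfo row,
  # from /proc/meminfo or from a per-node meminfo file when a node is given.
  get_meminfo() {
      local get=$1 node=${2:-} line var val _
      local mem_f=/proc/meminfo
      if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node rows carry a "Node N " prefix
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] && { echo "$val"; return 0; }
      done
      return 1
  }

  get_meminfo Hugepagesize   # -> 2048 on this rig, per the snapshot above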
0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.544 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.544 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.544 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.544 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:42.544 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:04:42.544 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:04:42.544 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:04:42.544 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:04:42.544 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:04:42.544 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:04:42.544 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:04:42.544 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:04:43.486 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.486 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.487 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43401756 kB' 'MemAvailable: 46903848 kB' 'Buffers: 2704 kB' 'Cached: 12718520 kB' 'SwapCached: 0 kB' 'Active: 9716728 kB' 'Inactive: 3500384 kB' 'Active(anon): 9322096 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 499112 kB' 'Mapped: 167272 kB' 'Shmem: 8826208 kB' 'KReclaimable: 201396 kB' 'Slab: 565544 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364148 kB' 
'KernelStack: 12736 kB' 'PageTables: 7752 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10416280 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195904 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:43.487
[xtrace condensed: setup/common.sh@31-32 cycle IFS=': ' / read -r var val _ / [[ <field> == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue once per meminfo field, MemTotal through HardwareCorrupted, none matching]
21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:43.488 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.488 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.488 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:04:43.488 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:43.488 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:43.488
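What the condensed xtrace above records is get_meminfo doing a linear scan: /proc/meminfo is read field by field, every non-matching field costs one [[ field == key ]] test plus a continue, and the value is echoed as soon as the requested key (here AnonHugePages) matches. A minimal standalone sketch of that scan, under a hypothetical name so it is not mistaken for the real setup/common.sh helper, and simplified to the system-wide case (no per-node /sys/devices/system/node/nodeN/meminfo handling):

    # Sketch only: the same scan the trace shows, reduced to /proc/meminfo.
    get_meminfo_sketch() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # skip every field that is not the requested key
            echo "$val"                        # e.g. "0" for AnonHugePages in this run
            return 0
        done < /proc/meminfo
        return 1
    }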
21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:04:43.488
[xtrace condensed: the rest of the get_meminfo prologue (local var val / local mem_f mem / mem_f=/proc/meminfo / node-meminfo existence test / mapfile -t mem / mem=("${mem[@]#Node +([0-9]) }") / IFS=': ' / read -r var val _) repeats exactly as in the AnonHugePages call]
21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43404296 kB' 'MemAvailable: 46906388 kB' 'Buffers: 2704 kB' 'Cached: 12718524 kB' 'SwapCached: 0 kB' 'Active: 9719504 kB' 'Inactive: 3500384 kB' 'Active(anon): 9324872 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 501936 kB' 'Mapped: 167192 kB' 'Shmem: 8826212 kB' 'KReclaimable: 201396 kB' 'Slab: 565536 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364140 kB' 'KernelStack: 12704 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10419476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195828 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:43.489
[xtrace condensed: per-field compare-and-continue against \H\u\g\e\P\a\g\e\s\_\S\u\r\p, MemTotal through HugePages_Rsvd, none matching]
21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:43.491 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.491 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.491 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:04:43.491 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:43.491 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:43.491
[xtrace condensed: get_meminfo prologue repeats as above]
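The two counters fetched here are the kernel's own pool bookkeeping: HugePages_Surp counts surplus pages allocated beyond the configured pool via overcommit, and HugePages_Rsvd counts pages reserved for mappings but not yet faulted in. The verification that follows then only has to confirm the arithmetic. Roughly, as a sketch reusing the hypothetical get_meminfo_sketch defined above rather than the script verbatim:

    # Sketch: the consistency check behind the (( ... )) tests below.
    expected=1024                                   # nr_hugepages requested by the test
    total=$(get_meminfo_sketch HugePages_Total)     # 1024 in this run
    surp=$(get_meminfo_sketch HugePages_Surp)       # 0
    resv=$(get_meminfo_sketch HugePages_Rsvd)       # 0
    (( total == expected + surp + resv )) || echo "hugepage pool mismatch" >&2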
21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43404472 kB' 'MemAvailable: 46906564 kB' 'Buffers: 2704 kB' 'Cached: 12718540 kB' 'SwapCached: 0 kB' 'Active: 9714356 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319724 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496736 kB' 'Mapped: 166440 kB' 'Shmem: 8826228 kB' 'KReclaimable: 201396 kB' 'Slab: 565584 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364188 kB' 'KernelStack: 12720 kB' 'PageTables: 7892 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413380 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195824 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:43.492
[xtrace condensed: per-field compare-and-continue against \H\u\g\e\P\a\g\e\s\_\R\s\v\d, MemTotal through HugePages_Free, none matching]
21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:43.493 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:04:43.493 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:04:43.493 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:43.493 nr_hugepages=1024 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:43.493 resv_hugepages=0 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:43.493 surplus_hugepages=0 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:43.493 anon_hugepages=0 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:43.493 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:43.493 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:43.493 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:43.493
[xtrace condensed: get_meminfo prologue repeats as above]
21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43404792
kB' 'MemAvailable: 46906884 kB' 'Buffers: 2704 kB' 'Cached: 12718560 kB' 'SwapCached: 0 kB' 'Active: 9714356 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319724 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496768 kB' 'Mapped: 166408 kB' 'Shmem: 8826248 kB' 'KReclaimable: 201396 kB' 'Slab: 565584 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364188 kB' 'KernelStack: 12720 kB' 'PageTables: 7860 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413400 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195840 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.494 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.495 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
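Note: the two meminfo scans condensed above are the whole of get_meminfo's work: pick the right meminfo file, strip any per-node prefix, then read "Key: value" pairs until the requested key matches. A minimal sketch of that loop, assuming the same file layout; the helper name and structure here are illustrative, not the exact setup/common.sh source:

  #!/usr/bin/env bash
  shopt -s extglob   # the +([0-9]) pattern below needs extglob, as in the traced script

  # Sketch: return one value from /proc/meminfo, or from a node's own meminfo
  # file when a node id is given. Mirrors the read loop visible in the trace.
  get_meminfo_sketch() {
      local get=$1 node=$2
      local mem_f=/proc/meminfo
      [[ -n $node ]] && mem_f=/sys/devices/system/node/node$node/meminfo
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")   # per-node lines start with "Node N "
      local var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$get" ]] || continue   # every other key "continues", as above
          echo "$val"
          return 0
      done < <(printf '%s\n' "${mem[@]}")
      return 1
  }

  get_meminfo_sketch HugePages_Total    # -> 1024 on the machine above
  get_meminfo_sketch HugePages_Surp 0   # -> 0 for node0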
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@17-31 -- # local get=HugePages_Surp node=0; mem_f=/sys/devices/system/node/node0/meminfo; mapfile -t mem; mem=("${mem[@]#Node +([0-9]) }"); IFS=': '; read -r var val _
00:04:43.496 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25922916 kB' 'MemUsed: 6906968 kB' 'SwapCached: 0 kB' 'Active: 3724740 kB' 'Inactive: 89144 kB' 'Active(anon): 3555252 kB' 'Inactive(anon): 0 kB' 'Active(file): 169488 kB' 'Inactive(file): 89144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3611496 kB' 'Mapped: 59360 kB' 'AnonPages: 205592 kB' 'Shmem: 3352864 kB' 'KernelStack: 7384 kB' 'PageTables: 3992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115220 kB' 'Slab: 329812 kB' 'SReclaimable: 115220 kB' 'SUnreclaim: 214592 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@31-32 -- # scan node0 meminfo for HugePages_Surp: every key before it fails the match and hits continue; HugePages_Surp matches
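Note: get_nodes, condensed above, discovers the NUMA topology by globbing the node directories and reading each node's current 2 MB hugepage pool. A small sketch of that discovery step, assuming the default 2048 kB page size seen in the dumps (variable names are illustrative):

  shopt -s extglob
  declare -a nodes_sys
  for node in /sys/devices/system/node/node+([0-9]); do
      # each node exposes its own pool per page size under hugepages/
      nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
  done
  echo "no_nodes=${#nodes_sys[@]} node0=${nodes_sys[0]} node1=${nodes_sys[1]}"
  # on this machine: no_nodes=2 node0=1024 node1=0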
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
node0=1024 expecting 1024
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:43.497 real 0m2.450s
00:04:43.497 user 0m0.682s
00:04:43.497 sys 0m0.862s
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:43.497 21:11:18 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x
00:04:43.497 ************************************
00:04:43.497 END TEST default_setup
00:04:43.497 ************************************
00:04:43.497 21:11:18 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:43.497 21:11:18 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc
00:04:43.497 21:11:18 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:43.498 21:11:18 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:43.498 21:11:18 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:43.498 ************************************
00:04:43.498 START TEST per_node_1G_alloc
00:04:43.498 ************************************
00:04:43.498 21:11:18 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc
00:04:43.498 21:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=,
00:04:43.498 21:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1
00:04:43.498 21:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49-73 -- # size=1048576; node_ids=('0' '1'); nr_hugepages=512; user_nodes=('0' '1'); _no_nodes=2; nodes_test[0]=512; nodes_test[1]=512; return 0
00:04:43.498 21:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 HUGENODE=0,1 setup output
00:04:43.498 21:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:43.498 21:11:18 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:44.880 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:44.880 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:44.880 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:44.880 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:44.880 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:44.880 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:44.880 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:44.880 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:44.880 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:44.880 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:44.880 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:44.880 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:44.880 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:44.880 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:44.880 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:44.880 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:44.880 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
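Note: the NRHUGE=512 HUGENODE=0,1 invocation above asks scripts/setup.sh for 512 pages on each of node0 and node1 (2 x 512 = the 1024 total the next check expects). A hedged sketch of the underlying per-node sysfs writes; the real setup.sh also handles device binding and related housekeeping, which is omitted here:

  # run as root: request 512 x 2 MB hugepages on each listed node
  NRHUGE=512
  for n in 0 1; do
      echo "$NRHUGE" > "/sys/devices/system/node/node$n/hugepages/hugepages-2048kB/nr_hugepages"
  done
  grep -i '^HugePages_Total' /proc/meminfo   # expect: HugePages_Total: 1024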
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:44.880 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43411256 kB' 'MemAvailable: 46913348 kB' 'Buffers: 2704 kB' 'Cached: 12718636 kB' 'SwapCached: 0 kB' 'Active: 9714064 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319432 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496400 kB' 'Mapped: 166496 kB' 'Shmem: 8826324 kB' 'KReclaimable: 201396 kB' 'Slab: 565636 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364240 kB' 'KernelStack: 12704 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- 
00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.880 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43411256 kB' 'MemAvailable: 46913348 kB' 'Buffers: 2704 kB' 'Cached: 12718636 kB' 'SwapCached: 0 kB' 'Active: 9714064 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319432 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496400 kB' 'Mapped: 166496 kB' 'Shmem: 8826324 kB' 'KReclaimable: 201396 kB' 'Slab: 565636 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364240 kB' 'KernelStack: 12704 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413456 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.881 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same "@32 compare / @32 continue / @31 IFS=': ' / @31 read -r var val _" cycle is traced for each remaining /proc/meminfo field down to HardwareCorrupted ...]
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0
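
The long compare/continue run above is just xtrace noise from a linear scan: get_meminfo slurps the meminfo file, strips any per-node "Node <n> " prefixes, and splits each line on ': ' until the requested field is found, then echoes its value. A sketch reconstructed from the setup/common.sh@17-@33 entries above; treat it as a paraphrase under that assumption, not the verbatim helper:

    get_meminfo() { # usage: get_meminfo <field> [<NUMA node>]
        local get=$1 node=${2:-}
        local var val _
        local mem_f=/proc/meminfo mem line
        # With a node argument, read that node's own meminfo; its lines carry
        # a "Node <n> " prefix that the extglob expansion below strips off.
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        shopt -s extglob
        mapfile -t mem < "$mem_f"
        mem=("${mem[@]#Node +([0-9]) }")
        for line in "${mem[@]}"; do
            # "MemTotal: 60541724 kB" -> var=MemTotal val=60541724 _=kB
            IFS=': ' read -r var val _ <<< "$line"
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done
        return 1
    }

Called as anon=$(get_meminfo AnonHugePages), it prints 0 for the snapshot above, which is exactly the echo 0 / anon=0 pair in the trace.
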
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43411488 kB' 'MemAvailable: 46913580 kB' 'Buffers: 2704 kB' 'Cached: 12718640 kB' 'SwapCached: 0 kB' 'Active: 9714732 kB' 'Inactive: 3500384 kB' 'Active(anon): 9320100 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 497040 kB' 'Mapped: 166496 kB' 'Shmem: 8826328 kB' 'KReclaimable: 201396 kB' 'Slab: 565620 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364224 kB' 'KernelStack: 12768 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413476 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.882 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
[... the same "@32 compare / @32 continue / @31 IFS=': ' / @31 read -r var val _" cycle is traced for each remaining /proc/meminfo field until HugePages_Surp is reached ...]
00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
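
The snapshot values are internally consistent: HugePages_Total is 1024 at a Hugepagesize of 2048 kB, and Hugetlb is 2097152 kB, which is exactly 1024 x 2048 kB, i.e. a 2 GiB pinned pool; HugePages_Surp counts pages the kernel allocated beyond nr_hugepages under pressure, and is 0 here. For the per-node variant this test exercises, the same counters are also exposed per NUMA node in sysfs; an illustrative one-off (standard kernel sysfs layout, not part of the traced scripts, and the values will differ per machine):

    # Per-NUMA-node 2 MiB hugepage counters.
    for n in /sys/devices/system/node/node[0-9]*; do
        hp=$n/hugepages/hugepages-2048kB
        printf '%s: total=%s free=%s surplus=%s\n' "${n##*/}" \
            "$(<"$hp"/nr_hugepages)" \
            "$(<"$hp"/free_hugepages)" \
            "$(<"$hp"/surplus_hugepages)"
    done
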
setup/hugepages.sh@99 -- # surp=0 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43412620 kB' 'MemAvailable: 46914712 kB' 'Buffers: 2704 kB' 'Cached: 12718640 kB' 'SwapCached: 0 kB' 'Active: 9714264 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319632 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496540 kB' 'Mapped: 166420 kB' 'Shmem: 8826328 kB' 'KReclaimable: 201396 kB' 'Slab: 565636 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364240 kB' 'KernelStack: 12752 kB' 'PageTables: 7828 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413496 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 
21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.884 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- 
00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.885 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[setup/common.sh@31-32: the same IFS=': ' / read -r var val _ / [[ ... ]] / continue xtrace repeats for every remaining /proc/meminfo key, PageTables through HugePages_Free, until HugePages_Rsvd is reached]
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
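The trace above is the harness's get_meminfo helper expanded under xtrace: each meminfo line is word-split on ': ', every key that is not the one requested becomes a "continue", and the value is echoed once the key matches. A minimal standalone sketch of that pattern (the function name and the sample call are illustrative assumptions, not the exact SPDK helper):

    #!/usr/bin/env bash
    # Print the value of one /proc/meminfo key, mirroring the
    # IFS=': ' read -r var val _ loop seen in the trace.
    get_meminfo_value() {
        local get=$1 var val _
        while IFS=': ' read -r var val _; do
            # every non-matching key shows up as a "continue" above
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < /proc/meminfo
        return 1
    }

    get_meminfo_value HugePages_Rsvd   # prints 0 on this run's host

The trailing "_" swallows the unit field ("kB"), so val stays purely numeric, which is what lets hugepages.sh do bare arithmetic on the result.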
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.886 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43413060 kB' 'MemAvailable: 46915152 kB' 'Buffers: 2704 kB' 'Cached: 12718680 kB' 'SwapCached: 0 kB' 'Active: 9714580 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319948 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496804 kB' 'Mapped: 166420 kB' 'Shmem: 8826368 kB' 'KReclaimable: 201396 kB' 'Slab: 565636 kB' 'SReclaimable: 201396 kB' 'SUnreclaim: 364240 kB' 'KernelStack: 12736 kB' 'PageTables: 7776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413520 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 195984 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[setup/common.sh@31-32: the read/continue xtrace repeats for every key, MemTotal through Unaccepted, until HugePages_Total is reached]
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
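The get_nodes step traced just below discovers the NUMA layout by globbing sysfs and records how many of the 1024 pages each node is expected to hold (512 per node on this two-node box). A rough self-contained sketch, assuming extglob is enabled as the harness does; the variable names here are illustrative:

    #!/usr/bin/env bash
    # Enumerate NUMA nodes from sysfs and record the per-node target.
    shopt -s extglob nullglob
    declare -A nodes_sys=()
    for node in /sys/devices/system/node/node+([0-9]); do
        nodes_sys[${node##*node}]=512   # array key is the node index
    done
    no_nodes=${#nodes_sys[@]}
    echo "no_nodes=$no_nodes"           # 2 in this run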
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:44.888 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26974708 kB' 'MemUsed: 5855176 kB' 'SwapCached: 0 kB' 'Active: 3724768 kB' 'Inactive: 89144 kB' 'Active(anon): 3555280 kB' 'Inactive(anon): 0 kB' 'Active(file): 169488 kB' 'Inactive(file): 89144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3611500 kB' 'Mapped: 59360 kB' 'AnonPages: 205556 kB' 'Shmem: 3352868 kB' 'KernelStack: 7432 kB' 'PageTables: 4008 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115220 kB' 'Slab: 329688 kB' 'SReclaimable: 115220 kB' 'SUnreclaim: 214468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[setup/common.sh@31-32: the read/continue xtrace repeats for every node0 meminfo key until HugePages_Surp is reached]
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711840 kB' 'MemFree: 16438276 kB' 'MemUsed: 11273564 kB' 'SwapCached: 0 kB' 'Active: 5989932 kB' 'Inactive: 3411240 kB' 'Active(anon): 5764788 kB' 'Inactive(anon): 0 kB' 'Active(file): 225144 kB' 'Inactive(file): 3411240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9109928 kB' 'Mapped: 107060 kB' 'AnonPages: 291324 kB' 'Shmem: 5473544 kB' 'KernelStack: 5336 kB' 'PageTables: 3872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 86176 kB' 'Slab: 235948 kB' 'SReclaimable: 86176 kB' 'SUnreclaim: 149772 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
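When get_meminfo is given a node argument, it reads /sys/devices/system/node/nodeN/meminfo instead of /proc/meminfo; every line in that file carries a "Node N " prefix, which the mem=("${mem[@]#Node +([0-9]) }") expansion strips so the same key scan works unchanged. A small sketch of just that step (illustrative, not the exact helper):

    #!/usr/bin/env bash
    # Strip the "Node 1 " prefix from node1's meminfo, then pull one key.
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node1/meminfo
    mem=("${mem[@]#Node +([0-9]) }")   # "Node 1 MemTotal: ..." -> "MemTotal: ..."
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == HugePages_Surp ]] && echo "$val"   # 0 in this run
    done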
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.149 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
[xtrace: setup/common.sh@31-32 keep reading "var: val" pairs and continue past every remaining per-node meminfo field -- Inactive(file) through HugePages_Free -- none of which matches HugePages_Surp]
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:45.150 node0=512 expecting 512
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512'
00:04:45.150 node1=512 expecting 512
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]]
00:04:45.150
00:04:45.150 real	0m1.438s
00:04:45.150 user	0m0.630s
00:04:45.150 sys	0m0.769s
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:45.150 21:11:19 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:45.150 ************************************
00:04:45.150 END TEST per_node_1G_alloc
00:04:45.150 ************************************
00:04:45.150 21:11:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:45.150 21:11:19 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc
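The trace above is the heart of setup/common.sh's get_meminfo helper: it snapshots a meminfo file into an array, then walks it field by field, continuing past every key until the requested one matches, at which point the value is echoed and picked up by the caller. A minimal standalone sketch of that scan pattern (the helper name get_meminfo_value is hypothetical; the real helper also handles per-node meminfo files, see the source-selection sketch further below):

# Minimal sketch of the scan the setup/common.sh@31-33 xtrace records:
# split each "Key:   value" line on IFS=': ', skip until the requested
# key matches, then print its value.
get_meminfo_value() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue   # the "# continue" lines in the log
        echo "$val"
        return 0
    done < /proc/meminfo
    return 1                               # key not present
}

get_meminfo_value HugePages_Surp   # -> 0 here, matching the "echo 0" above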
00:04:45.150 21:11:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:45.150 21:11:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:45.150 21:11:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:45.150 ************************************
00:04:45.150 START TEST even_2G_alloc
00:04:45.150 ************************************
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:45.150 21:11:19 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
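Worth spelling out the arithmetic the trace just performed before the setup output starts: get_test_nr_hugepages was handed 2097152 (2 GiB expressed in KiB), which at the default 2048 KiB hugepage size yields nr_hugepages=1024, and get_test_nr_hugepages_per_node then split that evenly over the two NUMA nodes, 512 pages each. A sketch of that computation, assuming the division by Hugepagesize that the size >= default_hugepages branch implies:

# Even-split arithmetic behind the trace above (variable names mirror the
# log; the explicit division is an assumption consistent with size=2097152
# producing nr_hugepages=1024 at the 2048 kB default page size).
size=2097152                                # requested pool, in KiB (2 GiB)
default_hugepages=2048                      # Hugepagesize from /proc/meminfo, KiB
nr_hugepages=$((size / default_hugepages))  # 1024 pages
_no_nodes=2                                 # NUMA nodes on this rig
declare -a nodes_test
for ((node = 0; node < _no_nodes; node++)); do
    nodes_test[node]=$((nr_hugepages / _no_nodes))   # 512 per node
done
echo "${nodes_test[@]}"                     # -> 512 512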
00:04:46.088 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:46.088 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:46.088 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:46.088 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:46.088 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:46.089 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:46.089 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:46.089 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:46.089 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:46.089 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:46.089 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:46.089 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:46.089 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:46.089 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:46.089 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:46.089 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:46.089 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
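The single test at setup/hugepages.sh@96 is a transparent-hugepage guard: the THP enabled control on this box reads "always [madvise] never", which does not match the *[never]* pattern, so THP is live and the anonymous hugepage count has to be sampled rather than assumed zero. A hedged sketch of that guard (the sysfs path is the standard kernel location; get_meminfo_value is the sketch from earlier, and the if/else framing is inferred from this one traced line):

# THP guard, modeled on the [[ ... != *\[\n\e\v\e\r\]* ]] line in the trace.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # e.g. "always [madvise] never"
if [[ $thp != *"[never]"* ]]; then
    anon=$(get_meminfo_value AnonHugePages)  # THP may be handing out anon hugepages
else
    anon=0                                   # THP disabled: nothing to account for
fi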
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.353 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43418240 kB' 'MemAvailable: 46920320 kB' 'Buffers: 2704 kB' 'Cached: 12718768 kB' 'SwapCached: 0 kB' 'Active: 9714284 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319652 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496376 kB' 'Mapped: 166628 kB' 'Shmem: 8826456 kB' 'KReclaimable: 201372 kB' 'Slab: 565508 kB' 'SReclaimable: 201372 kB' 'SUnreclaim: 364136 kB' 'KernelStack: 12784 kB' 'PageTables: 7800 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413716 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[xtrace: setup/common.sh@31-32 scan the snapshot field by field -- MemTotal through HardwareCorrupted -- and continue past each key that is not AnonHugePages]
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
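With anon settled at 0, the same get_meminfo machinery is invoked twice more for HugePages_Surp and HugePages_Rsvd (traced below). The bookkeeping those three numbers feed is only partly visible in this excerpt, so the final check in this sketch is an assumed invariant for a freshly allocated, untouched pool, not a quote of setup/hugepages.sh:

# Counter gathering per the hugepages.sh@97-@100 trace; the closing
# assertion is an assumption about the intended invariant.
anon=$(get_meminfo_value AnonHugePages)    # 0: no THP pages charged
surp=$(get_meminfo_value HugePages_Surp)   # 0: no surplus pages
resv=$(get_meminfo_value HugePages_Rsvd)   # 0: no reserved-but-unfaulted pages
total=$(get_meminfo_value HugePages_Total)
free=$(get_meminfo_value HugePages_Free)
(( total == 1024 && free == total )) || echo "unexpected hugepage state" >&2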
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.354 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43417736 kB' 'MemAvailable: 46919816 kB' 'Buffers: 2704 kB' 'Cached: 12718772 kB' 'SwapCached: 0 kB' 'Active: 9714032 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319400 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496076 kB' 'Mapped: 166656 kB' 'Shmem: 8826460 kB' 'KReclaimable: 201372 kB' 'Slab: 565568 kB' 'SReclaimable: 201372 kB' 'SUnreclaim: 364196 kB' 'KernelStack: 12752 kB' 'PageTables: 7724 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413736 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[xtrace: setup/common.sh@31-32 scan the snapshot field by field -- MemTotal through HugePages_Rsvd -- and continue past each key that is not HugePages_Surp]
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
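Every get_meminfo call in this test repeats the same source-selection step: local node= is empty, so the candidate path collapses to /sys/devices/system/node/node/meminfo, the -e test fails, and mem_f stays /proc/meminfo. Passed a node number (as the per-node test earlier in this log does), the same check would switch the read to the per-node file, whose lines carry a "Node N " prefix that the extglob substitution strips. A sketch of that selection, inferred from the @22-@29 lines:

# Source selection as traced at setup/common.sh@22-29 (structure inferred).
shopt -s extglob
node=${1-}                                   # empty -> whole-system counters
mem_f=/proc/meminfo
if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
    mem_f=/sys/devices/system/node/node$node/meminfo   # per-NUMA-node counters
fi
mapfile -t mem < "$mem_f"
mem=("${mem[@]#Node +([0-9]) }")             # drop the per-node "Node N " prefix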
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.356 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43418664 kB' 'MemAvailable: 46920744 kB' 'Buffers: 2704 kB' 'Cached: 12718788 kB' 'SwapCached: 0 kB' 'Active: 9713972 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319340 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496012 kB' 'Mapped: 166432 kB' 'Shmem: 8826476 kB' 'KReclaimable: 201372 kB' 'Slab: 565584 kB' 'SReclaimable: 201372 kB' 'SUnreclaim: 364212 kB' 'KernelStack: 12816 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[xtrace: the HugePages_Rsvd scan continues past MemTotal, MemFree and the following fields; the excerpt ends mid-scan at AnonPages]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.357 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 
21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:46.358 nr_hugepages=1024 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:46.358 resv_hugepages=0 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:46.358 surplus_hugepages=0 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:46.358 anon_hugepages=0 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:46.358 21:11:20 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- 
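The get_meminfo helper traced above is small enough to restate on its own. The following is a re-creation from the trace, a sketch rather than the verbatim setup/common.sh source: it reads /proc/meminfo (or a node's meminfo when a NUMA node is given), strips the "Node N " prefix that per-node files carry, and prints the value of the requested key.

    #!/usr/bin/env bash
    # get_meminfo, re-created from the setup/common.sh trace above; a sketch,
    # not the verbatim SPDK source.
    shopt -s extglob    # needed for the +([0-9]) pattern below

    get_meminfo() {
        local get=$1 node=$2
        local var val
        local mem_f mem
        mem_f=/proc/meminfo
        # Use the per-node view when the requested node exists in sysfs.
        [[ -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem <"$mem_f"
        # Per-node meminfo prefixes every line with "Node N "; strip it.
        mem=("${mem[@]#Node +([0-9]) }")
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            # First field is the key, second the value ("kB" is discarded).
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }

    get_meminfo HugePages_Total      # whole system -> 1024 in the run above
    get_meminfo HugePages_Surp 0     # NUMA node 0  -> 0 in the run above

This is also why the trace is so long: the helper walks every meminfo field in order and tests each one against the requested key, so a single lookup emits one [[ ... ]]/continue pair per field.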
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.358 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43418664 kB' 'MemAvailable: 46920744 kB' 'Buffers: 2704 kB' 'Cached: 12718808 kB' 'SwapCached: 0 kB' 'Active: 9714276 kB' 'Inactive: 3500384 kB' 'Active(anon): 9319644 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 496292 kB' 'Mapped: 166432 kB' 'Shmem: 8826496 kB' 'KReclaimable: 201372 kB' 'Slab: 565584 kB' 'SReclaimable: 201372 kB' 'SUnreclaim: 364212 kB' 'KernelStack: 12816 kB' 'PageTables: 7868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10413776 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[... per-key scan elided: every field from MemTotal through Unaccepted fails the \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l match and hits continue ...]
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
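From here on the trace establishes the even-split invariant of the even_2G_alloc test: the global pool must satisfy HugePages_Total == nr_hugepages + surplus + reserved (1024 == 1024 + 0 + 0), and each of the two NUMA nodes enumerated by get_nodes is expected to hold 512 pages. A condensed restatement follows; check_even_split is a name invented for this sketch, get_meminfo is the helper sketched earlier, and extglob is assumed on for the node glob.

    # Sketch of the accounting performed above, under the stated assumptions.
    check_even_split() {
        local expected=$1
        local surp resv total node no_nodes=0
        surp=$(get_meminfo HugePages_Surp)
        resv=$(get_meminfo HugePages_Rsvd)
        total=$(get_meminfo HugePages_Total)
        # Global pool must account for every page: 1024 == 1024 + 0 + 0 here.
        (( total == expected + surp + resv )) || return 1
        for node in /sys/devices/system/node/node+([0-9]); do    # extglob
            (( no_nodes++ ))
            # Each node should hold an even share of the pool: 512 pages here.
            (( $(get_meminfo HugePages_Total "${node##*node}") == expected / 2 )) || return 1
        done
        (( no_nodes == 2 ))    # this rig has two NUMA nodes
    }

    check_even_split 1024 && echo 'even 2G alloc: OK'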
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.360 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26973264 kB' 'MemUsed: 5856620 kB' 'SwapCached: 0 kB' 'Active: 3723764 kB' 'Inactive: 89144 kB' 'Active(anon): 3554276 kB' 'Inactive(anon): 0 kB' 'Active(file): 169488 kB' 'Inactive(file): 89144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3611504 kB' 'Mapped: 59360 kB' 'AnonPages: 204488 kB' 'Shmem: 3352872 kB' 'KernelStack: 7400 kB' 'PageTables: 3912 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115220 kB' 'Slab: 329692 kB' 'SReclaimable: 115220 kB' 'SUnreclaim: 214472 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-key scan elided: every node0 field from MemTotal through HugePages_Free fails the \H\u\g\e\P\a\g\e\s\_\S\u\r\p match and hits continue ...]
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:46.361 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711840 kB' 'MemFree: 16444896 kB' 'MemUsed: 11266944 kB' 'SwapCached: 0 kB' 'Active: 5990264 kB' 'Inactive: 3411240 kB' 'Active(anon): 5765120 kB' 'Inactive(anon): 0 kB' 'Active(file): 225144 kB' 'Inactive(file): 3411240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9110052 kB' 'Mapped: 107072 kB' 'AnonPages: 291528 kB' 'Shmem: 5473668 kB' 'KernelStack: 5416 kB' 'PageTables: 3956 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 86152 kB' 'Slab: 235892 kB' 'SReclaimable: 86152 kB' 'SUnreclaim: 149740 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[... per-key scan of the node1 fields against \H\u\g\e\P\a\g\e\s\_\S\u\r\p continues ...]
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:04:46.362 node0=512 expecting 512 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:46.362 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:46.363 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:46.363 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:04:46.363 node1=512 expecting 512 00:04:46.363 21:11:21 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:04:46.363 00:04:46.363 real 0m1.361s 00:04:46.363 user 0m0.572s 00:04:46.363 sys 0m0.748s 00:04:46.363 21:11:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.363 21:11:21 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:46.363 ************************************ 00:04:46.363 END TEST even_2G_alloc 00:04:46.363 ************************************ 00:04:46.363 21:11:21 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:46.363 21:11:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:04:46.363 21:11:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:46.363 21:11:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.363 21:11:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:46.621 
00:04:46.363 21:11:21 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc
00:04:46.363 21:11:21 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:46.363 21:11:21 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:46.363 21:11:21 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:46.621 ************************************
00:04:46.621 START TEST odd_alloc
00:04:46.621 ************************************
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 ))
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=()
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 ))
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 ))
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 ))
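The @81-@84 lines above are the point of odd_alloc: 1025 hugepages cannot be split evenly across two nodes, so node 1 keeps 512 and the odd page lands on node 0 (the second pass writes nodes_test[_no_nodes - 1]=513). A compact sketch of that split; the arithmetic below is reconstructed from the traced assignments and is an assumption, not the script's literal loop:

    # Sketch of the per-node split driven by setup/hugepages.sh@81-@84 above.
    # The trace walks nodes from last to first; a plain floor-plus-remainder
    # split produces the same result.
    split_per_node() {
        local _nr_hugepages=$1 _no_nodes=$2 node
        local -a nodes_test
        local base=$((_nr_hugepages / _no_nodes))   # 1025 / 2 = 512
        local rem=$((_nr_hugepages % _no_nodes))    # 1025 % 2 = 1
        for ((node = 0; node < _no_nodes; node++)); do
            # the first $rem nodes absorb one extra page each
            nodes_test[node]=$((base + (node < rem ? 1 : 0)))
        done
        echo "${nodes_test[@]}"
    }

    split_per_node 1025 2   # -> "513 512", matching nodes_test[0]=513 nodes_test[1]=512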
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:46.621 21:11:21 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:47.556 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:47.556 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:47.556 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:47.556 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:47.556 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:47.556 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:47.556 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:47.556 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:47.556 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:47.556 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:47.556 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:47.556 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:47.556 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:47.556 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:47.556 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:47.556 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:47.556 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.820 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43401892 kB' 'MemAvailable: 46903956 kB' 'Buffers: 2704 kB' 'Cached: 12718908 kB' 'SwapCached: 0 kB' 'Active: 9711304 kB' 'Inactive: 3500384 kB' 'Active(anon): 9316672 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493288 kB' 'Mapped: 165744 kB' 'Shmem: 8826596 kB' 'KReclaimable: 201340 kB' 'Slab: 565772 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364432 kB' 'KernelStack: 12768 kB' 'PageTables: 7616 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10400692 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[... xtrace elided: key-by-key scan of this dump against AnonHugePages, one IFS=': ' read and continue per key ...]
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0
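Two probes feed verify_nr_hugepages at this point: the transparent-hugepage state string (always [madvise] never, so THP is not pinned to [never]) gates a read of AnonHugePages, which comes back 0 kB, hence anon=0. The equivalent logic, sketched as plain shell; the sysfs path is an assumption, since the trace only shows the already-expanded string:

    # How anon=0 above is derived (sketch; the sysfs path is assumed, the
    # comparison and get_meminfo call mirror setup/hugepages.sh@96-@97).
    thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)  # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # AnonHugePages: 0 kB  ->  anon=0
    else
        anon=0
    fi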
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.821 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.822 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43402148 kB' 'MemAvailable: 46904212 kB' 'Buffers: 2704 kB' 'Cached: 12718912 kB' 'SwapCached: 0 kB' 'Active: 9711076 kB' 'Inactive: 3500384 kB' 'Active(anon): 9316444 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493068 kB' 'Mapped: 165580 kB' 'Shmem: 8826600 kB' 'KReclaimable: 201340 kB' 'Slab: 565772 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364432 kB' 'KernelStack: 12752 kB' 'PageTables: 7544 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10400708 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[... xtrace elided: key-by-key scan of this dump against HugePages_Surp, from MemTotal through HugePages_Free; the log excerpt breaks off mid-scan ...]
-- # IFS=': ' 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43401392 kB' 'MemAvailable: 46903456 kB' 'Buffers: 2704 kB' 'Cached: 12718924 kB' 'SwapCached: 0 kB' 'Active: 9711564 kB' 'Inactive: 3500384 kB' 'Active(anon): 9316932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493992 kB' 'Mapped: 165580 kB' 'Shmem: 8826612 kB' 'KReclaimable: 201340 kB' 'Slab: 565772 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364432 kB' 'KernelStack: 12752 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10403108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == 
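The loop traced above is setup/common.sh's get_meminfo helper scanning /proc/meminfo key by key, skipping each non-matching line with continue until the requested field (here HugePages_Surp) matches. A minimal standalone sketch of the same read-and-compare pattern, assuming a plain /proc/meminfo layout; meminfo_get is our illustrative name, not the harness's function:

#!/usr/bin/env bash
# Scan /proc/meminfo with IFS=': ' and print the value of the first
# line whose key equals $1, mirroring the continue-until-match loop
# in the trace. meminfo_get is a hypothetical name for illustration.
meminfo_get() {
	local get=$1 var val _
	while IFS=': ' read -r var val _; do
		[[ $var == "$get" ]] || continue
		echo "$val"   # a bare count, or a size whose trailing "kB" lands in $_
		return 0
	done < /proc/meminfo
	return 1
}

meminfo_get HugePages_Surp   # prints 0 on the node traced here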
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43401392 kB' 'MemAvailable: 46903456 kB' 'Buffers: 2704 kB' 'Cached: 12718924 kB' 'SwapCached: 0 kB' 'Active: 9711564 kB' 'Inactive: 3500384 kB' 'Active(anon): 9316932 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493992 kB' 'Mapped: 165580 kB' 'Shmem: 8826612 kB' 'KReclaimable: 201340 kB' 'Slab: 565772 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364432 kB' 'KernelStack: 12752 kB' 'PageTables: 7556 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10403108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.823 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025
00:04:47.825 nr_hugepages=1025
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:47.825 resv_hugepages=0
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:47.825 surplus_hugepages=0
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:47.825 anon_hugepages=0
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages ))
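At hugepages.sh@107 and @109 the test asserts that the pool it configured is self-consistent: the kernel-reported total must equal the requested count plus any surplus and reserved pages, all of which were just read back (1025, 0 and 0). A sketch of that arithmetic check under the variable meanings shown in the trace; verify_hugepages and its argument order are ours, for illustration only:

# Assert HugePages_Total == nr_hugepages + surplus + reserved, the
# identity checked at setup/hugepages.sh@107 (hypothetical helper).
verify_hugepages() {
	local total=$1 nr=$2 surp=$3 resv=$4
	if ((total != nr + surp + resv)); then
		echo "hugepage accounting mismatch: $total != $nr + $surp + $resv" >&2
		return 1
	fi
}

verify_hugepages 1025 1025 0 0   # the values read back in this run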
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43402320 kB' 'MemAvailable: 46904384 kB' 'Buffers: 2704 kB' 'Cached: 12718948 kB' 'SwapCached: 0 kB' 'Active: 9712252 kB' 'Inactive: 3500384 kB' 'Active(anon): 9317620 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494236 kB' 'Mapped: 165580 kB' 'Shmem: 8826636 kB' 'KReclaimable: 201340 kB' 'Slab: 565772 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364432 kB' 'KernelStack: 13136 kB' 'PageTables: 8700 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609864 kB' 'Committed_AS: 10403120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196368 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.825 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv ))
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
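The nested get_meminfo HugePages_Surp 0 call that follows switches its source file: with a node argument present and /sys/devices/system/node/node0/meminfo existing, it reads the per-node file instead of /proc/meminfo, then strips the leading "Node 0 " from every line with the extglob expansion ${mem[@]#Node +([0-9]) } so the same key/value parser applies to both layouts. A sketch of that branch under those assumptions; node_meminfo_get is our illustrative name:

#!/usr/bin/env bash
# Read a field from the per-NUMA-node meminfo when a node is given,
# stripping the "Node <N> " prefix the same way the traced helper does.
# node_meminfo_get is a hypothetical name for illustration.
shopt -s extglob   # required for the +([0-9]) pattern below

node_meminfo_get() {
	local get=$1 node=$2 mem_f=/proc/meminfo line var val _
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	while IFS= read -r line; do
		line=${line#Node +([0-9]) }          # no-op for /proc/meminfo lines
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done < "$mem_f"
	return 1
}

node_meminfo_get HugePages_Surp 0   # prints 0 for node0 in this run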
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26980072 kB' 'MemUsed: 5849812 kB' 'SwapCached: 0 kB' 'Active: 3726100 kB' 'Inactive: 89144 kB' 'Active(anon): 3556612 kB' 'Inactive(anon): 0 kB' 'Active(file): 169488 kB' 'Inactive(file): 89144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3611520 kB' 'Mapped: 58868 kB' 'AnonPages: 206836 kB' 'Shmem: 3352888 kB' 'KernelStack: 7832 kB' 'PageTables: 5960 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115220 kB' 'Slab: 329660 kB' 'SReclaimable: 115220 kB' 'SUnreclaim: 214440 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.827 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
...
00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue
-- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
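A note on the get_meminfo calls traced above: setup/common.sh reads /sys/devices/system/node/node0/meminfo into an array with mapfile, strips the "Node 0 " prefix from every line, then walks the entries with IFS=': ' read -r var val _ until the requested key matches, echoes its value, and returns 0 (the "echo 0" / "return 0" pairs in the trace). A condensed sketch of that pattern, reconstructed from the trace rather than copied from the helper:

#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) prefix strip below

get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo var val _ line
    # Prefer the per-node sysfs file when a node index is given and present.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    local -a mem
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")   # drop the "Node <n> " column per-node files carry
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}

get_meminfo HugePages_Surp 0   # prints 0 for the node0 snapshot traced above
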
00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711840 kB' 'MemFree: 16424064 kB' 'MemUsed: 11287776 kB' 'SwapCached: 0 kB' 'Active: 5986736 kB' 'Inactive: 3411240 kB' 'Active(anon): 5761592 kB' 'Inactive(anon): 0 kB' 'Active(file): 225144 kB' 'Inactive(file): 3411240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9110164 kB' 'Mapped: 106712 kB' 'AnonPages: 287928 kB' 'Shmem: 5473780 kB' 'KernelStack: 5304 kB' 'PageTables: 3396 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 86120 kB' 'Slab: 236104 kB' 'SReclaimable: 86120 kB' 'SUnreclaim: 149984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
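Worth noting for the two per-node snapshots above: node0 reports HugePages_Total: 512 and node1 reports 513, i.e. the odd request of 1025 pages (checked earlier via (( 1025 == nr_hugepages + surp + resv ))) left one node holding the extra page. The test's own expectation ran the other way around ("node0=512 expecting 513" / "node1=513 expecting 512" further down), which is why the pass condition only compares the sorted multiset of counts at the end. A hypothetical helper sketching the even-split-plus-remainder arithmetic behind that expectation:

# Hypothetical sketch: an even base share per NUMA node, with the remainder
# of an odd total going to the lowest-numbered nodes first.
split_hugepages() {
    local total=$1 nodes=$2 base extra i
    base=$(( total / nodes ))
    extra=$(( total % nodes ))
    for (( i = 0; i < nodes; i++ )); do
        echo "node$i=$(( base + (i < extra ? 1 : 0) ))"
    done
}

split_hugepages 1025 2
# node0=513   <- the expectation; the kernel actually parked the extra page
# node1=512      on node1 here, hence the sorted-set comparison at the end
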
00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.828 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node 
in "${!nodes_test[@]}" 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.829 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:04:47.829 node0=512 expecting 513 00:04:47.830 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:47.830 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:47.830 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:47.830 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:04:47.830 node1=513 expecting 512 00:04:47.830 21:11:22 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:04:47.830 00:04:47.830 real 0m1.435s 00:04:47.830 user 0m0.633s 00:04:47.830 sys 0m0.765s 00:04:47.830 21:11:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.830 21:11:22 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:47.830 ************************************ 00:04:47.830 END TEST odd_alloc 00:04:47.830 ************************************ 00:04:47.830 21:11:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:04:47.830 21:11:22 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:04:47.830 21:11:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:47.830 21:11:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.830 21:11:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:48.090 ************************************ 00:04:48.090 START TEST custom_alloc 00:04:48.090 ************************************ 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:04:48.090 21:11:22 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:04:48.090 21:11:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:49.027 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:49.027 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:04:49.028 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:49.028 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:04:49.028 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:49.028 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:49.028 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:49.028 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:49.028 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:49.028 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:04:49.028 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:04:49.028 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 
00:04:49.028 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:04:49.028 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:04:49.028 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:04:49.028 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:04:49.028 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 42339992 kB' 'MemAvailable: 45842056 kB' 'Buffers: 2704 kB' 'Cached: 12719036 kB' 'SwapCached: 0 kB' 'Active: 9716720 kB' 'Inactive: 3500384 kB' 'Active(anon): 9322088 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498136 kB' 'Mapped: 166444 kB' 'Shmem: 8826724 kB' 'KReclaimable: 201340 kB' 'Slab: 565892 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364552 kB' 'KernelStack: 12720 kB' 'PageTables: 7400 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10407068 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196084 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
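For the custom_alloc run being verified here, the preamble above assembled HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' (1536 pages in total) before invoking scripts/setup.sh, and the meminfo snapshot duly reports HugePages_Total: 1536. A minimal sketch of how such a per-node spec maps onto the kernel's sysfs knobs, assuming 2048 kB pages and root privileges; this is an illustration of the interface, not the setup.sh implementation:

#!/usr/bin/env bash
# Hypothetical helper: program per-NUMA-node 2MiB hugepage pools from a
# HUGENODE-style string such as "nodes_hp[0]=512,nodes_hp[1]=1024".
apply_hugenode() {
    local spec=$1 entry node count
    local -a entries
    IFS=',' read -ra entries <<< "$spec"
    for entry in "${entries[@]}"; do
        node=${entry#nodes_hp[}   # "0]=512"
        node=${node%%]*}          # "0"
        count=${entry#*=}         # "512"
        echo "$count" > "/sys/devices/system/node/node$node/hugepages/hugepages-2048kB/nr_hugepages"
    done
}

apply_hugenode 'nodes_hp[0]=512,nodes_hp[1]=1024'
# Read back what the kernel actually granted on each node:
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
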
00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.028 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # 
continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.292 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc 
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.293 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 42340492 kB' 'MemAvailable: 45842556 kB' 'Buffers: 2704 kB' 'Cached: 12719040 kB' 'SwapCached: 0 kB' 'Active: 9712908 kB' 'Inactive: 3500384 kB' 'Active(anon): 9318276 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 494824 kB' 'Mapped: 166156 kB' 'Shmem: 8826728 kB' 'KReclaimable: 201340 kB' 'Slab: 565868 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364528 kB' 'KernelStack: 12752 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10403508 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[... the @31 IFS=': ' / read -r var val _ and @32 compare/continue pair repeats for every key from MemTotal down to HugePages_Rsvd ...]
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0
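Editor's note: the hugetlb figures in the meminfo snapshot above are internally consistent — the pool size is page count times page size. A one-line sanity check (variable names are ours; the values are copied from the dump):

hugepages_total=1536    # 'HugePages_Total: 1536'
hugepagesize_kb=2048    # 'Hugepagesize: 2048 kB'
echo $(( hugepages_total * hugepagesize_kb ))   # 3145728 — matches 'Hugetlb: 3145728 kB', i.e. 3 GiB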
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.294 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 42334504 kB' 'MemAvailable: 45836568 kB' 'Buffers: 2704 kB' 'Cached: 12719056 kB' 'SwapCached: 0 kB' 'Active: 9716556 kB' 'Inactive: 3500384 kB' 'Active(anon): 9321924 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498404 kB' 'Mapped: 166156 kB' 'Shmem: 8826744 kB' 'KReclaimable: 201340 kB' 'Slab: 565928 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364588 kB' 'KernelStack: 12752 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10407108 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[... the @31 IFS=': ' / read -r var val _ and @32 compare/continue pair repeats for every key from MemTotal down to HugePages_Free ...]
00:04:49.296 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:49.296 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.296 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:49.296 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:49.296 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536
00:04:49.296 nr_hugepages=1536
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:49.297 resv_hugepages=0
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:49.297 surplus_hugepages=0
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:49.297 anon_hugepages=0
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv ))
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages ))
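Editor's note: the two arithmetic guards just traced are the crux of the custom_alloc verification — the pool the kernel reports must be exactly the requested size, with no surplus, reserved, or THP-backed pages. A hedged Bash restatement, with values as observed in the trace (expected is our hypothetical stand-in for whatever expanded to the literal 1536):

nr_hugepages=1536   # echoed at hugepages.sh@102
anon=0              # AnonHugePages lookup above
surp=0              # HugePages_Surp lookup above
resv=0              # HugePages_Rsvd lookup above
expected=1536       # hypothetical name; the trace shows the literal 1536
(( expected == nr_hugepages + surp + resv ))   # hugepages.sh@107 — true: 1536 == 1536 + 0 + 0
(( expected == nr_hugepages ))                 # hugepages.sh@109 — true, so the test proceeds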
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 42335360 kB' 'MemAvailable: 45837424 kB' 'Buffers: 2704 kB' 'Cached: 12719076 kB' 'SwapCached: 0 kB' 'Active: 9716764 kB' 'Inactive: 3500384 kB' 'Active(anon): 9322132 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 498556 kB' 'Mapped: 166376 kB' 'Shmem: 8826764 kB' 'KReclaimable: 201340 kB' 'Slab: 565928 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364588 kB' 'KernelStack: 12752 kB' 'PageTables: 7504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086600 kB' 'Committed_AS: 10407128 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196036 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:49.297 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue
[... the @31 IFS=': ' / read -r var val _ and @32 compare/continue pair repeats key by key; this excerpt of the log breaks off mid-scan at the ShmemPmdMapped comparison, before HugePages_Total is reached ...]
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:49.298 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 
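The records above trace common.sh's get_meminfo helper scanning /proc/meminfo field by field until it reaches HugePages_Total and echoes 1536. A minimal self-contained sketch of that helper, reconstructed from the xtrace (names follow the trace; the actual setup/common.sh implementation may differ in detail):

```bash
#!/usr/bin/env bash
shopt -s extglob   # needed for the +([0-9]) pattern used below

# Reconstructed sketch of setup/common.sh's get_meminfo: pull one field out
# of /proc/meminfo, or out of a NUMA node's meminfo when a node id is given.
get_meminfo() {
	local get=$1 node=$2
	local var val
	local mem_f mem line

	mem_f=/proc/meminfo
	# Per-node lookups read the node's own meminfo file instead; with an
	# empty $node this path does not exist and /proc/meminfo is kept.
	if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi

	mapfile -t mem < "$mem_f"
	# Node meminfo prefixes every line with "Node N "; strip that prefix.
	mem=("${mem[@]#Node +([0-9]) }")

	# This is the long IFS=': ' / read / continue run visible in the log:
	# every field that is not the requested one is skipped.
	for line in "${mem[@]}"; do
		IFS=': ' read -r var val _ <<< "$line"
		[[ $var == "$get" ]] || continue
		echo "$val"
		return 0
	done
	return 1
}

get_meminfo HugePages_Total   # prints 1536 on the machine under test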
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.299 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 26961100 kB' 'MemUsed: 5868784 kB' 'SwapCached: 0 kB' 'Active: 3724436 kB' 'Inactive: 89144 kB' 'Active(anon): 3554948 kB' 'Inactive(anon): 0 kB' 'Active(file): 169488 kB' 'Inactive(file): 89144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3611524 kB' 'Mapped: 58868 kB' 'AnonPages: 205172 kB' 'Shmem: 3352892 kB' 'KernelStack: 7432 kB' 'PageTables: 3988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115220 kB' 'Slab: 329824 kB' 'SReclaimable: 115220 kB' 'SUnreclaim: 214604 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
[log condensed: setup/common.sh@31-32 scans every node0 meminfo field from MemTotal through HugePages_Free, continuing past each one that is not HugePages_Surp]
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
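The @115-117 records fold reserved and surplus pages into each node's expected count before the final comparison. A minimal sketch of that accounting, reconstructed from the trace (resv holds the reserved-page count computed earlier in verify_nr_hugepages, and get_meminfo is the helper sketched above):

```bash
# Sketch of setup/hugepages.sh@115-117: per node, add the reserved pages and
# that node's surplus pages (both 0 in this run) to the expected count.
for node in "${!nodes_test[@]}"; do
	(( nodes_test[node] += resv ))
	(( nodes_test[node] += $(get_meminfo HugePages_Surp "$node") ))
done
```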
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:49.300 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27711840 kB' 'MemFree: 15374020 kB' 'MemUsed: 12337820 kB' 'SwapCached: 0 kB' 'Active: 5986860 kB' 'Inactive: 3411240 kB' 'Active(anon): 5761716 kB' 'Inactive(anon): 0 kB' 'Active(file): 225144 kB' 'Inactive(file): 3411240 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 9110260 kB' 'Mapped: 106724 kB' 'AnonPages: 287956 kB' 'Shmem: 5473876 kB' 'KernelStack: 5320 kB' 'PageTables: 3436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 86120 kB' 'Slab: 236104 kB' 'SReclaimable: 86120 kB' 'SUnreclaim: 149984 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
[log condensed: setup/common.sh@31-32 scans every node1 meminfo field from MemTotal through HugePages_Free, continuing past each one that is not HugePages_Surp]
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512'
00:04:49.301 node0=512 expecting 512
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024'
00:04:49.301 node1=1024 expecting 1024
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]]
00:04:49.301
00:04:49.301 real 0m1.323s
00:04:49.301 user 0m0.541s
00:04:49.301 sys 0m0.743s
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable
00:04:49.301 21:11:23 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x
00:04:49.301 ************************************
00:04:49.301 END TEST custom_alloc
00:04:49.301 ************************************
00:04:49.301 21:11:23 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0
00:04:49.301 21:11:23 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc
00:04:49.301 21:11:23 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:04:49.301 21:11:23 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable
00:04:49.301 21:11:23 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x
00:04:49.301 ************************************
00:04:49.301 START TEST no_shrink_alloc
00:04:49.301 ************************************
00:04:49.301 21:11:23 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc
00:04:49.301 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 ))
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0')
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages ))
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0')
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=()
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test
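The @126-130 records above are custom_alloc's final check. A sketch of the mechanism, reconstructed from the trace and assuming nodes_test and nodes_sys were populated as shown earlier; the value-indexed sorted_t/sorted_s arrays and the comma join are inferred from the `[[ 512,1024 == \5\1\2\,\1\0\2\4 ]]` comparison:

```bash
# Reconstructed sketch of setup/hugepages.sh@126-130: collect the expected
# and the sysfs-reported per-node counts as *indices* of two sparse arrays,
# so listing the indices yields each set sorted and de-duplicated.
declare -a sorted_t sorted_s
for node in "${!nodes_test[@]}"; do
	sorted_t[nodes_test[node]]=1   # expected count for this node
	sorted_s[nodes_sys[node]]=1    # count read back from sysfs
	echo "node$node=${nodes_sys[node]} expecting ${nodes_test[node]}"
done
# Joining the index lists with a comma gives "512,1024" on both sides here;
# the test passes when the two signatures are equal.
IFS=','
[[ "${!sorted_s[*]}" == "${!sorted_t[*]}" ]] && echo "hugepage layout OK"
```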
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 ))
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}"
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:49.302 21:11:23 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:50.685 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:50.686 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:50.686 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:50.686 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:50.686 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:50.686 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:50.686 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:50.686 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:50.686 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:50.686 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:50.686 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:50.686 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:50.686 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:50.686 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:50.686 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:50.686 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:50.686 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
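The get_test_nr_hugepages trace that opens no_shrink_alloc (size 2097152, node list '0') reduces to the sketch below. The kB unit for size and the division by the 2048 kB Hugepagesize are assumptions; they are consistent with nr_hugepages=1024 and 'Hugetlb: 2097152 kB' in this log, but the real setup/hugepages.sh may compute this differently:

```bash
# Hedged sketch of setup/hugepages.sh@49-73 as exercised by no_shrink_alloc.
# ASSUMPTION: size and default_hugepages are both in kB
# (2097152 / 2048 = 1024, matching the traced nr_hugepages=1024).
default_hugepages=2048   # Hugepagesize from /proc/meminfo, in kB
declare -a nodes_test
nr_hugepages=0

get_test_nr_hugepages() {
	local size=$1
	shift
	local node_ids=("$@")   # here: ('0'), pin the pool to node 0
	(( size >= default_hugepages )) || return 1
	nr_hugepages=$((size / default_hugepages))
	get_test_nr_hugepages_per_node "${node_ids[@]}"
}

get_test_nr_hugepages_per_node() {
	local user_nodes=("$@")
	local _nr_hugepages=$nr_hugepages
	local _no_nodes
	if (( ${#user_nodes[@]} > 0 )); then
		# Explicit node list: place the full count on each named node.
		for _no_nodes in "${user_nodes[@]}"; do
			nodes_test[_no_nodes]=$_nr_hugepages
		done
		return 0
	fi
	# With no explicit nodes the count would be spread across all nodes.
}

get_test_nr_hugepages 2097152 0   # -> nodes_test[0]=1024, as in the trace
```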
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.686 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43366816 kB' 'MemAvailable: 46868880 kB' 'Buffers: 2704 kB' 'Cached: 12719156 kB' 'SwapCached: 0 kB' 'Active: 9711816 kB' 'Inactive: 3500384 kB' 'Active(anon): 9317184 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493544 kB' 'Mapped: 165604 kB' 'Shmem: 8826844 kB' 'KReclaimable: 201340 kB' 'Slab: 565828 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364488 kB' 'KernelStack: 12768 kB' 'PageTables: 7532 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401272 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[log condensed: setup/common.sh@31-32 scans the system meminfo fields from MemTotal through PageTables, continuing past every field that is not AnonHugePages; the trace continues below]
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables ==
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43367452 kB' 'MemAvailable: 46869516 kB' 'Buffers: 2704 kB' 'Cached: 12719160 kB' 'SwapCached: 0 kB' 'Active: 9711512 kB' 'Inactive: 3500384 kB' 'Active(anon): 9316880 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493232 kB' 'Mapped: 165600 kB' 'Shmem: 8826848 kB' 'KReclaimable: 201340 kB' 'Slab: 565804 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364464 kB' 'KernelStack: 12768 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401288 kB' 'VmallocTotal: 34359738367 kB' 
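That scan is the whole trick behind get_meminfo: read /proc/meminfo (or a per-NUMA-node meminfo file) into an array, strip any "Node N " prefix, then walk the lines with IFS=': ' until the requested key matches, and print its value. A minimal sketch of the helper, reconstructed from the xtrace above (the trace names it setup/common.sh; the argument handling and the return-on-miss here are assumptions, not the verbatim SPDK code):

    #!/usr/bin/env bash
    shopt -s extglob   # needed for the +([0-9]) pattern below

    # Reconstructed from the xtrace output; a simplified sketch, not SPDK's exact helper.
    get_meminfo() {
        local get=$1 node=${2:-} var val _
        local mem_f=/proc/meminfo mem
        # With a node argument, prefer the per-node meminfo if it exists
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] \
            && mem_f=/sys/devices/system/node/node$node/meminfo
        mapfile -t mem < "$mem_f"
        # Per-node files prefix every line with "Node N "; strip that off
        mem=("${mem[@]#Node +([0-9]) }")
        while IFS=': ' read -r var val _; do
            # Skip every key until the requested one, then print its value
            [[ $var == "$get" ]] || continue
            echo "$val"
            return 0
        done < <(printf '%s\n' "${mem[@]}")
        return 1   # assumption: key not found
    }

    get_meminfo AnonHugePages   # prints 0 on this box, matching the trace

Note how the trace mirrors every step: the [[ -e /sys/devices/system/node/node/meminfo ]] check shows an empty node argument (hence the fallback to /proc/meminfo), and the "kB" suffix ends up in the throwaway _ field, which is why the function echoes a bare 0.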
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.687 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43367452 kB' 'MemAvailable: 46869516 kB' 'Buffers: 2704 kB' 'Cached: 12719160 kB' 'SwapCached: 0 kB' 'Active: 9711512 kB' 'Inactive: 3500384 kB' 'Active(anon): 9316880 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493232 kB' 'Mapped: 165600 kB' 'Shmem: 8826848 kB' 'KReclaimable: 201340 kB' 'Slab: 565804 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364464 kB' 'KernelStack: 12768 kB' 'PageTables: 7460 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401288 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
00:04:50.688 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace elided: every key from MemTotal through HugePages_Rsvd failed [[ $var == HugePages_Surp ]] and hit continue, until HugePages_Surp itself came up]
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
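Both the surplus and the reserved counts come straight out of /proc/meminfo, so they can be spot-checked outside the harness too. Two equivalent one-liners (not part of the SPDK scripts, just illustrative):

    awk '/^HugePages_Surp:/ {print $2}' /proc/meminfo   # surplus pages, 0 in this run
    awk '/^HugePages_Rsvd:/ {print $2}' /proc/meminfo   # reserved pages, 0 in this run

A non-zero HugePages_Surp would mean the kernel is holding pages above the persistent pool (overcommit), and a non-zero HugePages_Rsvd would mean pages are promised to mappings but not yet faulted in; either would skew the shrink check this test is about to make.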
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:50.689 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43367944 kB' 'MemAvailable: 46870008 kB' 'Buffers: 2704 kB' 'Cached: 12719180 kB' 'SwapCached: 0 kB' 'Active: 9711520 kB' 'Inactive: 3500384 kB' 'Active(anon): 9316888 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493248 kB' 'Mapped: 165600 kB' 'Shmem: 8826868 kB' 'KReclaimable: 201340 kB' 'Slab: 565892 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364552 kB' 'KernelStack: 12784 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401312 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
00:04:50.690 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [xtrace elided: every key from MemTotal through HugePages_Free failed [[ $var == HugePages_Rsvd ]] and hit continue]
setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:04:50.691 nr_hugepages=1024 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:04:50.691 resv_hugepages=0 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:04:50.691 surplus_hugepages=0 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:04:50.691 anon_hugepages=0 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43367944 kB' 'MemAvailable: 46870008 kB' 'Buffers: 2704 kB' 'Cached: 12719200 kB' 'SwapCached: 0 kB' 'Active: 9711532 kB' 'Inactive: 3500384 kB' 'Active(anon): 9316900 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493248 kB' 'Mapped: 165600 kB' 'Shmem: 8826888 kB' 'KReclaimable: 201340 kB' 'Slab: 565892 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364552 kB' 'KernelStack: 12784 kB' 'PageTables: 7536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401332 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 
'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:50.691 21:11:25 
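What the trace above shows, expanded once per /proc/meminfo record, is one small parser: get_meminfo snapshots a meminfo file into an array and walks it with IFS=': ' read -r var val _ until the requested key matches, then echoes the value. A minimal self-contained re-creation of that pattern (a hypothetical stand-in for SPDK's setup/common.sh helper, not the original source):

```bash
#!/usr/bin/env bash
# Hypothetical stand-in for the get_meminfo helper traced above.
shopt -s extglob

get_meminfo() {
	local get=$1 node=${2:-} mem_f=/proc/meminfo
	local var val _ line
	# With a node id, read the per-node statistics from sysfs instead.
	if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
		mem_f=/sys/devices/system/node/node$node/meminfo
	fi
	local -a mem
	mapfile -t mem < "$mem_f"
	# Per-node files prefix each record with "Node <id> "; strip it so
	# every element is back to the plain "Key: value" layout.
	mem=("${mem[@]#Node +([0-9]) }")
	for line in "${mem[@]}"; do
		# Same tokenization as the trace: split on ':' and spaces.
		IFS=': ' read -r var val _ <<< "$line"
		if [[ $var == "$get" ]]; then
			echo "$val" # value only, unit dropped
			return 0
		fi
	done
	return 1
}

get_meminfo HugePages_Total    # prints e.g. 1024
get_meminfo HugePages_Free 0   # node0 value, if node0 exists
```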
00:04:50.691 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [read/compare loop: every /proc/meminfo key before HugePages_Total fails the match, continue]
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
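With the `(( 1024 == nr_hugepages + surp + resv ))` identity confirmed, the script next tallies hugepages per NUMA node. A rough sketch of that get_nodes-style enumeration over sysfs; the nr_hugepages leaf is an assumption (it is the standard kernel location, but the exact path used by the SPDK helper is not shown in this log):

```bash
#!/usr/bin/env bash
# Sketch of the get_nodes step: enumerate NUMA nodes and record each
# node's 2 MB hugepage count. The nr_hugepages leaf is the standard
# kernel location, assumed here rather than taken from this log.
shopt -s extglob nullglob

declare -A nodes_sys
for node in /sys/devices/system/node/node+([0-9]); do
	# ${node##*node} keeps only the trailing node id, as in the trace.
	nodes_sys[${node##*node}]=$(< "$node/hugepages/hugepages-2048kB/nr_hugepages")
done

echo "no_nodes=${#nodes_sys[@]}"                # 2 on this machine
for id in "${!nodes_sys[@]}"; do
	echo "node$id: ${nodes_sys[$id]} hugepages" # node0: 1024, node1: 0
done
```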
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29-30 -- # [for node in /sys/devices/system/node/node+([0-9]): nodes_sys[0]=1024, nodes_sys[1]=0]
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue: get=HugePages_Surp, node=0, mem_f=/sys/devices/system/node/node0/meminfo, mapfile -t mem, strip "Node 0 " prefixes, IFS=': ' read -r var val _]
00:04:50.693 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25933360 kB' 'MemUsed: 6896524 kB' 'SwapCached: 0 kB' 'Active: 3724248 kB' 'Inactive: 89144 kB' 'Active(anon): 3554760 kB' 'Inactive(anon): 0 kB' 'Active(file): 169488 kB' 'Inactive(file): 89144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3611528 kB' 'Mapped: 58868 kB' 'AnonPages: 205064 kB' 'Shmem: 3352896 kB' 'KernelStack: 7384 kB' 'PageTables: 3928 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115220 kB' 'Slab: 329800 kB' 'SReclaimable: 115220 kB' 'SUnreclaim: 214580 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0'
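One detail worth calling out from the prologue: per-node meminfo records carry a "Node <id> " prefix that /proc/meminfo records do not, and the mem=("${mem[@]#Node +([0-9]) }") expansion strips it from every array element in one shot. A tiny demonstration (extglob must be enabled for the +([0-9]) pattern):

```bash
#!/usr/bin/env bash
# Per-node meminfo records look like "Node 0 MemTotal: 32829884 kB".
# The expansion below is the trace's mem=("${mem[@]#Node +([0-9]) }"):
# it deletes the shortest "Node <digits> " prefix from every element.
shopt -s extglob
mem=('Node 0 MemTotal: 32829884 kB' 'Node 0 MemFree: 25933360 kB')
mem=("${mem[@]#Node +([0-9]) }")
printf '%s\n' "${mem[@]}"
# MemTotal: 32829884 kB
# MemFree: 25933360 kB
```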
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [read/compare loop: every node0 meminfo key before HugePages_Surp fails the match, continue]
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}"
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024'
00:04:50.694 node0=1024 expecting 1024
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]]
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]]
00:04:50.694 21:11:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh
00:04:52.135 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:52.135 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver
00:04:52.135 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:52.135 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:52.135 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:52.135 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:52.135 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:52.135 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:52.135 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:52.135 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver
00:04:52.135 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver
00:04:52.135 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver
00:04:52.135 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver
00:04:52.135 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver
00:04:52.135 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver
00:04:52.135 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver
00:04:52.135 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver
00:04:52.136 INFO: Requested 512 hugepages but 1024 already allocated on node0
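That INFO line is setup.sh declining to shrink an existing reservation: with CLEAR_HUGE=no and NRHUGE=512, node0 already holds 1024 pages, so the request is a no-op. A rough illustration of the underlying sysfs mechanics (a hypothetical simplification of scripts/setup.sh behavior, run as root; the exact policy in the real script may differ):

```bash
#!/usr/bin/env bash
# Hypothetical simplification of what a setup.sh hugepage request does
# (run as root; the real script's policy may differ). With CLEAR_HUGE=no
# an existing, larger reservation is left in place, which is exactly the
# "Requested 512 hugepages but 1024 already allocated on node0" case.
NRHUGE=${NRHUGE:-512}
hp=/sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages

cur=$(< "$hp")
if (( cur >= NRHUGE )); then
	echo "INFO: Requested $NRHUGE hugepages but $cur already allocated on node0"
else
	echo "$NRHUGE" > "$hp" # kernel allocates the difference if it can
fi
```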
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]]
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17-31 -- # [get_meminfo prologue: get=AnonHugePages, node unset, mem_f=/proc/meminfo, mapfile -t mem, strip "Node N " prefixes, IFS=': ' read -r var val _]
00:04:52.136 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43377928 kB' 'MemAvailable: 46879992 kB' 'Buffers: 2704 kB' 'Cached: 12719276 kB' 'SwapCached: 0 kB' 'Active: 9712304 kB' 'Inactive: 3500384 kB' 'Active(anon): 9317672 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493940 kB' 'Mapped: 165660 kB' 'Shmem: 8826964 kB' 'KReclaimable: 201340 kB' 'Slab: 565600 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364260 kB' 'KernelStack: 12784 kB' 'PageTables: 7552 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401516 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
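The @96 test above samples the kernel's transparent-hugepage mode, where the bracketed word is the active choice ("always [madvise] never" on this host). Since the mode is not [never], the script cannot assume anonymous THP usage stays at zero and therefore reads AnonHugePages explicitly. A minimal sketch of that guard (assuming the standard sysfs path, which the escaped pattern in the trace implies):

```bash
#!/usr/bin/env bash
# Minimal version of the @96 guard: the bracketed word in this sysfs file
# is the active THP mode, e.g. "always [madvise] never" on this host.
thp=$(< /sys/kernel/mm/transparent_hugepage/enabled)
if [[ $thp != *"[never]"* ]]; then
	# THP may be handing out anonymous hugepages, so measure instead of
	# assuming zero -- the same reason the script calls get_meminfo.
	grep AnonHugePages /proc/meminfo   # e.g. "AnonHugePages: 0 kB"
fi
```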
00:04:52.137 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31-32 -- # [read/compare loop: every /proc/meminfo key before AnonHugePages fails the match, continue]
00:04:52.137 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]]
00:04:52.137 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.137 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.137 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0
00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp
00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # 
[[ -e /sys/devices/system/node/node/meminfo ]] 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43382468 kB' 'MemAvailable: 46884532 kB' 'Buffers: 2704 kB' 'Cached: 12719280 kB' 'SwapCached: 0 kB' 'Active: 9711812 kB' 'Inactive: 3500384 kB' 'Active(anon): 9317180 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493412 kB' 'Mapped: 165612 kB' 'Shmem: 8826968 kB' 'KReclaimable: 201340 kB' 'Slab: 565600 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364260 kB' 'KernelStack: 12784 kB' 'PageTables: 7472 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401536 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB' 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.138 21:11:26 setup.sh.hugepages.no_shrink_alloc 
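For reference while reading the trace: the backslash-riddled patterns such as \A\n\o\n\H\u\g\e\P\a\g\e\s are simply how bash xtrace renders the unquoted expansion of $get on the right-hand side of [[ $var == $get ]]. Below is a minimal bash sketch of what setup/common.sh's get_meminfo appears to do, reconstructed from the xtrace alone; the extglob flag, the exact per-node test, and the final 'return 1' are assumptions, not verified against the SPDK source.

#!/usr/bin/env bash
# Sketch of get_meminfo reconstructed from the xtrace above.
# NOT the verbatim setup/common.sh source; details are assumed.
shopt -s extglob   # needed for the +([0-9]) pattern below (assumption)

get_meminfo() {
    local get=$1       # key to look up, e.g. HugePages_Surp
    local node=${2:-}  # optional NUMA node; empty means system-wide
    local var val
    local mem_f mem
    mem_f=/proc/meminfo
    # With a node argument, read the per-node counters instead
    if [[ -e /sys/devices/system/node/node$node/meminfo ]] && [[ -n $node ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    mapfile -t mem < "$mem_f"
    # Per-node files prefix every line with "Node <N> "; strip it
    mem=("${mem[@]#Node +([0-9]) }")
    local line
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] || continue   # the long 'continue' runs in the log
        echo "$val"
        return 0
    done
    return 1   # assumption: key not found
}

get_meminfo HugePages_Surp   # prints 0 on the node traced above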
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.140 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43381952 kB' 'MemAvailable: 46884016 kB' 'Buffers: 2704 kB' 'Cached: 12719296 kB' 'SwapCached: 0 kB' 'Active: 9711788 kB' 'Inactive: 3500384 kB' 'Active(anon): 9317156 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493420 kB' 'Mapped: 165612 kB' 'Shmem: 8826984 kB' 'KReclaimable: 201340 kB' 'Slab: 565640 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364300 kB' 'KernelStack: 12800 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401556 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[xtrace condensed: setup/common.sh@31-32, every field of the snapshot is matched against HugePages_Rsvd and skipped via 'continue' until the HugePages_Rsvd line is reached]
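Because the suite calls get_meminfo once per counter, the whole meminfo snapshot is read, printed, and rescanned for every key, which is why the same block appears three times in this stretch of the log. A hypothetical one-pass alternative (not what the suite does) would collect all hugepage counters in a single read:

#!/usr/bin/env bash
# Hypothetical one-pass reader: grab every HugePages_* counter from a
# single scan of /proc/meminfo instead of one get_meminfo call per key.
declare -A hp
while IFS=': ' read -r key val _; do
    case $key in
        HugePages_Total | HugePages_Free | HugePages_Rsvd | HugePages_Surp)
            hp[$key]=$val
            ;;
    esac
done < /proc/meminfo
# On the node traced above this prints Total/Free 1024 and Rsvd/Surp 0.
for key in HugePages_Total HugePages_Free HugePages_Rsvd HugePages_Surp; do
    printf '%s=%s\n' "$key" "${hp[$key]}"
done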
00:04:52.142 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:04:52.143 nr_hugepages=1024
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:04:52.143 resv_hugepages=0
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:04:52.143 surplus_hugepages=0
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:04:52.143 anon_hugepages=0
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _
00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541724 kB' 'MemFree: 43381952 kB' 'MemAvailable: 46884016 kB' 'Buffers: 2704 kB' 'Cached: 12719316 kB' 'SwapCached: 0 kB' 'Active: 9712104 kB' 'Inactive: 3500384 kB' 'Active(anon): 9317472 kB' 'Inactive(anon): 0 kB' 'Active(file): 394632 kB' 'Inactive(file): 3500384 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 493716 kB' 'Mapped: 165612 kB' 'Shmem: 8827004 kB' 'KReclaimable: 201340 kB' 'Slab: 565640 kB' 'SReclaimable: 201340 kB' 'SUnreclaim: 364300 kB' 'KernelStack: 12800 kB' 'PageTables: 7548 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610888 kB' 'Committed_AS: 10401580 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196016 kB' 'VmallocChunk: 0 kB' 'Percpu: 33792 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1777244 kB' 'DirectMap2M: 12822528 kB' 'DirectMap1G: 54525952 kB'
[xtrace condensed: setup/common.sh@31-32, every field of the snapshot is matched against HugePages_Total and skipped via 'continue'; the scan continues below]
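The hugepages.sh@102-109 lines above are the payoff of the three lookups: the test echoes the derived counters and then asserts that the kernel's HugePages_Total (1024 in the snapshots) still equals the requested pool size with zero surplus and zero reserved pages, i.e. the no_shrink_alloc case did not shrink the pool. A sketch of that accounting with this run's values follows; the variable names mirror the trace, but the failure handling is an assumption:

#!/usr/bin/env bash
# Accounting check as seen at setup/hugepages.sh@107-109 in the trace.
# Values are the ones printed in this run; exit-on-failure is assumed.
nr_hugepages=1024   # requested pool size
anon=0              # AnonHugePages (first lookup)
surp=0              # HugePages_Surp (second lookup)
resv=0              # HugePages_Rsvd (third lookup)
total=1024          # get_meminfo HugePages_Total

(( total == nr_hugepages + surp + resv )) || exit 1  # no surplus/reserved drift
(( total == nr_hugepages )) || exit 1                # pool was not shrunk
echo "nr_hugepages=$nr_hugepages resv_hugepages=$resv surplus_hugepages=$surp anon_hugepages=$anon"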
21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.143 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.144 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32829884 kB' 'MemFree: 25933356 kB' 'MemUsed: 6896528 kB' 'SwapCached: 0 kB' 'Active: 3725196 kB' 'Inactive: 89144 kB' 'Active(anon): 3555708 kB' 'Inactive(anon): 0 kB' 'Active(file): 169488 kB' 'Inactive(file): 89144 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 3611544 kB' 'Mapped: 58868 kB' 'AnonPages: 205972 kB' 'Shmem: 3352912 kB' 'KernelStack: 7464 kB' 'PageTables: 4100 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 115220 kB' 'Slab: 329772 kB' 'SReclaimable: 115220 kB' 'SUnreclaim: 214552 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.145 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 
21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 
21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.146 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:04:52.147 node0=1024 expecting 1024 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:04:52.147 00:04:52.147 real 0m2.844s 00:04:52.147 user 0m1.194s 00:04:52.147 sys 0m1.575s 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.147 21:11:26 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:04:52.147 ************************************ 00:04:52.147 END TEST no_shrink_alloc 00:04:52.147 ************************************ 00:04:52.147 21:11:26 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 
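The xtrace above is SPDK's get_meminfo helper scanning a meminfo snapshot field by field: mapfile slurps /proc/meminfo (or, for per-node queries, the node's own meminfo, whose "Node <n> " prefix is stripped first), then an IFS=': ' read loop skips every field until the requested key matches and its value is echoed. The sketch below is a minimal reconstruction of that pattern as the trace shows it, not the verbatim test/setup/common.sh source; the extglob setup and the return-1 fallback are assumptions.

#!/usr/bin/env bash
# Hedged sketch of the get_meminfo pattern traced above (not verbatim SPDK source).
shopt -s extglob    # needed for the +([0-9]) pattern below

get_meminfo() {
    local get=$1 node=$2 var val mem_f mem
    mem_f=/proc/meminfo
    # Per-node lookups read the node's own meminfo; its lines carry a
    # "Node <n> " prefix that must be stripped before keys can match.
    [[ -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")
    # The trace shows one [[ ... ]] test per field; non-matches hit 'continue'.
    while IFS=': ' read -r var val _; do
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < <(printf '%s\n' "${mem[@]}")
    return 1
}

get_meminfo HugePages_Total    # -> 1024 on this runner
get_meminfo HugePages_Surp 0   # -> 0 for node0

The surrounding hugepages.sh bookkeeping (resv, surp, nodes_sys) is what produces the nr_hugepages=1024 / resv_hugepages=0 summary lines seen in the log.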
00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:04:52.147 21:11:26 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:04:52.147 00:04:52.147 real 0m11.247s 00:04:52.147 user 0m4.409s 00:04:52.147 sys 0m5.722s 00:04:52.147 21:11:26 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:52.147 21:11:26 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:04:52.147 ************************************ 00:04:52.147 END TEST hugepages 00:04:52.147 ************************************ 00:04:52.147 21:11:26 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:52.147 21:11:26 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:52.147 21:11:26 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:52.147 21:11:26 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:52.147 21:11:26 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:52.405 ************************************ 00:04:52.405 START TEST driver 00:04:52.405 ************************************ 00:04:52.405 21:11:26 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:04:52.405 * Looking for test storage... 
00:04:52.405 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:52.406 21:11:26 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:04:52.406 21:11:26 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:52.406 21:11:26 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:54.939 21:11:29 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:04:54.939 21:11:29 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.939 21:11:29 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.939 21:11:29 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:54.939 ************************************ 00:04:54.939 START TEST guess_driver 00:04:54.939 ************************************ 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 141 > 0 )) 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:04:54.940 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:54.940 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:54.940 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:04:54.940 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:04:54.940 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:04:54.940 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:04:54.940 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:04:54.940 21:11:29 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]]
00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci'
Looking for driver=vfio-pci
00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver
00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config
00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]]
00:04:54.940 21:11:29 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config
[marker loop elided: from 00:04:55.877 (21:11:30) to 00:04:57.073 (21:11:31) the trace repeats setup/driver.sh@58 '[[ -> == \-\> ]]', @61 '[[ vfio-pci == vfio-pci ]]' and @57 'read -r _ _ _ _ marker setup_driver' once per configured device; every marker reports vfio-pci]
00:04:57.073 21:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 ))
00:04:57.073 21:11:31 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset
00:04:57.073 21:11:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]]
00:04:57.073 21:11:31 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset
00:04:59.605
00:04:59.605
real 0m4.723s
user 0m1.084s
sys 0m1.748s
21:11:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.605
21:11:34 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:04:59.605
************************************
END TEST guess_driver
************************************
00:04:59.605 21:11:34 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0
00:04:59.605
real 0m7.290s
user 0m1.643s
sys 0m2.772s
21:11:34
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.605 21:11:34 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:04:59.605 ************************************ 00:04:59.605 END TEST driver 00:04:59.605 ************************************ 00:04:59.605 21:11:34 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:59.605 21:11:34 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:59.605 21:11:34 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:59.605 21:11:34 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.605 21:11:34 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:59.605 ************************************ 00:04:59.605 START TEST devices 00:04:59.605 ************************************ 00:04:59.605 21:11:34 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:04:59.605 * Looking for test storage... 00:04:59.605 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:04:59.605 21:11:34 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:04:59.605 21:11:34 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:04:59.605 21:11:34 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:04:59.605 21:11:34 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:05:00.977 21:11:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:05:00.977 21:11:35 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:05:00.977 21:11:35 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:05:00.977 21:11:35 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:05:00.977 21:11:35 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:05:00.977 21:11:35 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:05:00.977 21:11:35 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:05:00.977 21:11:35 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:05:00.977 21:11:35 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:05:00.977 21:11:35 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:05:00.977 
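Before any mount tests run, devices.sh filters candidate disks, as the trace above shows: a namespace is skipped if its block queue reports a zoned mode other than none, and it must be at least min_disk_size (3221225472 bytes, i.e. 3 GiB); here nvme0n1's 1000204886016 bytes qualify and it becomes the test disk. Below is a condensed sketch of that filter under stated assumptions: it omits the spdk-gpt.py in-use probe the real script also runs, and the loop body is an illustration rather than the verbatim autotest source.

#!/usr/bin/env bash
# Hedged sketch of the zoned/size device filter traced above (not verbatim
# autotest source; the real flow also runs spdk-gpt.py to skip in-use disks).
min_disk_size=$((3 * 1024 * 1024 * 1024))   # 3221225472, as in the trace

is_block_zoned() {
    local device=$1
    # A missing attribute means the kernel does not treat the device as zoned.
    [[ -e /sys/block/$device/queue/zoned ]] || return 1
    [[ $(< "/sys/block/$device/queue/zoned") != none ]]
}

for path in /sys/block/nvme*; do
    [[ -e $path ]] || continue           # no NVMe devices present
    dev=${path##*/}
    is_block_zoned "$dev" && continue    # zoned namespaces are excluded
    size=$(( $(< "$path/size") * 512 ))  # sysfs size is in 512-byte sectors
    (( size >= min_disk_size )) && echo "$dev usable: $size bytes"
done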
21:11:35 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:05:01.235 No valid GPT data, bailing 00:05:01.235 21:11:35 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:05:01.235 21:11:35 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:05:01.235 21:11:35 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:05:01.235 21:11:35 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:05:01.235 21:11:35 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:05:01.235 21:11:35 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:05:01.235 21:11:35 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:05:01.235 21:11:35 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:05:01.235 21:11:35 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:05:01.235 21:11:35 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:05:01.235 21:11:35 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:05:01.235 21:11:35 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:05:01.235 21:11:35 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:05:01.235 21:11:35 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:01.235 21:11:35 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.235 21:11:35 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:01.235 ************************************ 00:05:01.235 START TEST nvme_mount 00:05:01.235 ************************************ 00:05:01.235 21:11:35 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:05:01.235 21:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:05:01.235 21:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:05:01.235 21:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:01.236 21:11:35 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:05:02.172 Creating new GPT entries in memory. 00:05:02.172 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:02.172 other utilities. 00:05:02.172 21:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:02.172 21:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:02.172 21:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:02.172 21:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:02.172 21:11:36 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:03.109 Creating new GPT entries in memory. 00:05:03.109 The operation has completed successfully. 00:05:03.109 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:03.109 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:03.109 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 761726 00:05:03.109 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.109 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:05:03.109 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.109 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:05:03.109 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:05:03.367 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.367 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.367 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:03.367 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:05:03.367 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:03.367 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:03.367 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:03.367 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:03.368 21:11:37 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:03.368 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:03.368 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:03.368 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:03.368 21:11:37 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:03.368 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:03.368 21:11:37 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:04.303 21:11:38 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:04.564 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:04.564 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:04.856 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:04.856 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:04.856 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:04.856 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:04.856 21:11:39 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:06.262 21:11:40 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.200 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 
00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:07.201 21:11:41 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:07.461 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:07.461 00:05:07.461 real 0m6.260s 00:05:07.461 user 0m1.521s 00:05:07.461 sys 0m2.276s 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:07.461 21:11:42 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:05:07.461 ************************************ 00:05:07.461 END TEST nvme_mount 00:05:07.461 ************************************ 00:05:07.461 21:11:42 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:05:07.461 21:11:42 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:05:07.461 21:11:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:07.461 21:11:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:07.461 21:11:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:07.461 ************************************ 00:05:07.461 START TEST dm_mount 00:05:07.461 ************************************ 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:05:07.461 21:11:42 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:05:08.399 Creating new GPT entries in memory. 00:05:08.399 GPT data structures destroyed! You may now partition the disk using fdisk or 00:05:08.399 other utilities. 00:05:08.399 21:11:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:05:08.399 21:11:43 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:08.399 21:11:43 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:05:08.399 21:11:43 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:08.399 21:11:43 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:05:09.779 Creating new GPT entries in memory. 00:05:09.779 The operation has completed successfully. 00:05:09.779 21:11:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:09.779 21:11:44 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:09.779 21:11:44 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:05:09.779 21:11:44 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:05:09.779 21:11:44 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:05:10.719 The operation has completed successfully. 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 764114 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:10.719 21:11:45 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:11.656 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:05:11.914 21:11:46 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:05:11.914 21:11:46 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:05:12.851 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:05:13.111 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:05:13.111 00:05:13.111 real 0m5.618s 00:05:13.111 user 0m0.961s 00:05:13.111 sys 0m1.520s 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.111 21:11:47 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:05:13.111 ************************************ 00:05:13.111 END TEST dm_mount 00:05:13.111 ************************************ 00:05:13.111 21:11:47 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:05:13.111 21:11:47 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:05:13.111 21:11:47 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:05:13.111 21:11:47 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:05:13.111 21:11:47 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.111 21:11:47 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:05:13.111 21:11:47 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.111 21:11:47 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:05:13.371 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:05:13.371 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:05:13.371 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:05:13.371 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:05:13.371 21:11:48 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:05:13.371 21:11:48 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:05:13.371 21:11:48 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:05:13.371 21:11:48 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:05:13.371 21:11:48 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:05:13.371 21:11:48 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:05:13.371 21:11:48 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:05:13.371 00:05:13.371 real 0m13.801s 00:05:13.371 user 0m3.136s 00:05:13.371 sys 0m4.820s 00:05:13.371 21:11:48 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.371 21:11:48 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:05:13.371 ************************************ 00:05:13.371 END TEST devices 00:05:13.371 ************************************ 00:05:13.371 21:11:48 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:05:13.371 00:05:13.371 real 0m42.985s 00:05:13.371 user 0m12.473s 00:05:13.371 sys 0m18.687s 00:05:13.371 21:11:48 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.371 21:11:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:05:13.371 ************************************ 00:05:13.371 END TEST setup.sh 00:05:13.371 ************************************ 00:05:13.371 21:11:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.371 21:11:48 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:05:14.748 Hugepages 00:05:14.748 node hugesize free / total 00:05:14.748 node0 1048576kB 0 / 0 00:05:14.748 node0 2048kB 2048 / 2048 00:05:14.748 node1 1048576kB 0 / 0 00:05:14.748 node1 2048kB 0 / 0 00:05:14.748 00:05:14.748 Type BDF Vendor Device NUMA Driver Device Block devices 00:05:14.748 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:05:14.748 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:05:14.748 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:05:14.748 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:05:14.748 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:05:14.748 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:05:14.748 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:05:14.748 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - - 00:05:14.748 I/OAT 
0000:80:04.0 8086 0e20 1 ioatdma - - 00:05:14.748 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:05:14.748 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:05:14.749 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:05:14.749 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:05:14.749 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:05:14.749 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:05:14.749 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:05:14.749 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:05:14.749 21:11:49 -- spdk/autotest.sh@130 -- # uname -s 00:05:14.749 21:11:49 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:05:14.749 21:11:49 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:05:14.749 21:11:49 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:15.685 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:15.685 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:15.685 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:15.685 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:15.685 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:15.685 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:15.685 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:15.685 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:15.945 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:15.945 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:15.945 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:15.945 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:15.945 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:15.945 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:15.945 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:15.945 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:16.886 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:16.886 21:11:51 -- common/autotest_common.sh@1532 -- # sleep 1 00:05:17.825 21:11:52 -- common/autotest_common.sh@1533 -- # bdfs=() 00:05:17.825 21:11:52 -- common/autotest_common.sh@1533 -- # local bdfs 00:05:17.825 21:11:52 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:05:17.825 21:11:52 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:05:17.825 21:11:52 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:17.825 21:11:52 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:17.825 21:11:52 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:17.825 21:11:52 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:17.825 21:11:52 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:18.085 21:11:52 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:18.085 21:11:52 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:18.085 21:11:52 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:05:19.023 Waiting for block devices as requested 00:05:19.023 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:05:19.282 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:19.282 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:19.541 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:19.542 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:19.542 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:19.542 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:19.801 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:19.801 0000:00:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:05:19.801 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:05:19.801 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:05:20.060 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:05:20.060 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:05:20.060 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:05:20.060 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:05:20.319 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:05:20.319 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:05:20.319 21:11:55 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:05:20.319 21:11:55 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:05:20.319 21:11:55 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:05:20.320 21:11:55 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:05:20.320 21:11:55 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:20.320 21:11:55 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:05:20.320 21:11:55 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:05:20.320 21:11:55 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:05:20.320 21:11:55 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:05:20.320 21:11:55 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:05:20.320 21:11:55 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:05:20.320 21:11:55 -- common/autotest_common.sh@1545 -- # grep oacs 00:05:20.320 21:11:55 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:05:20.320 21:11:55 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:05:20.320 21:11:55 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:05:20.320 21:11:55 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:05:20.320 21:11:55 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:05:20.320 21:11:55 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:05:20.320 21:11:55 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:05:20.320 21:11:55 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:05:20.320 21:11:55 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:05:20.320 21:11:55 -- common/autotest_common.sh@1557 -- # continue 00:05:20.320 21:11:55 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:05:20.320 21:11:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:20.320 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:05:20.578 21:11:55 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:05:20.578 21:11:55 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:20.578 21:11:55 -- common/autotest_common.sh@10 -- # set +x 00:05:20.578 21:11:55 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:05:21.518 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:21.518 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:05:21.518 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:21.518 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:21.518 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:21.518 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:21.518 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:21.777 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:21.777 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:05:21.777 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 
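The ioatdma -> vfio-pci lines here are setup.sh detaching each I/OAT channel from its kernel driver and handing it to vfio-pci for userspace use (the earlier "reset" phase runs the same rebind in the opposite direction). A minimal sketch of the underlying sysfs mechanism, as a generic illustration rather than SPDK's actual setup.sh logic; the bdf value is one address copied from the surrounding log:

  bdf=0000:80:04.5        # example PCI address taken from the log above
  new_driver=vfio-pci     # driver to hand the device to

  # Release the device from whatever driver currently owns it, if any.
  if [ -e "/sys/bus/pci/devices/$bdf/driver" ]; then
      echo "$bdf" > "/sys/bus/pci/devices/$bdf/driver/unbind"
  fi

  # Pin the device to the desired driver, then ask the kernel to re-probe it.
  echo "$new_driver" > "/sys/bus/pci/devices/$bdf/driver_override"
  echo "$bdf" > /sys/bus/pci/drivers_probe

Swapping new_driver back to ioatdma reverses the binding, which is what the vfio-pci -> ioatdma lines during the reset phase correspond to.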
00:05:21.777 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:05:21.777 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:05:21.777 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:05:21.777 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:05:21.777 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:05:21.777 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:05:22.712 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:05:22.712 21:11:57 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:05:22.712 21:11:57 -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:22.712 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:05:22.712 21:11:57 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:05:22.712 21:11:57 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:05:22.712 21:11:57 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:05:22.712 21:11:57 -- common/autotest_common.sh@1577 -- # bdfs=() 00:05:22.712 21:11:57 -- common/autotest_common.sh@1577 -- # local bdfs 00:05:22.712 21:11:57 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:05:22.712 21:11:57 -- common/autotest_common.sh@1513 -- # bdfs=() 00:05:22.712 21:11:57 -- common/autotest_common.sh@1513 -- # local bdfs 00:05:22.712 21:11:57 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:05:22.712 21:11:57 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:05:22.712 21:11:57 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:05:22.971 21:11:57 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:05:22.971 21:11:57 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:05:22.971 21:11:57 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:05:22.971 21:11:57 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:05:22.971 21:11:57 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:05:22.971 21:11:57 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:05:22.971 21:11:57 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:05:22.971 21:11:57 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:05:22.971 21:11:57 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:05:22.971 21:11:57 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=769288 00:05:22.971 21:11:57 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.971 21:11:57 -- common/autotest_common.sh@1598 -- # waitforlisten 769288 00:05:22.971 21:11:57 -- common/autotest_common.sh@829 -- # '[' -z 769288 ']' 00:05:22.971 21:11:57 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.971 21:11:57 -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.971 21:11:57 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:22.971 21:11:57 -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.971 21:11:57 -- common/autotest_common.sh@10 -- # set +x 00:05:22.971 [2024-07-11 21:11:57.546777] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
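Right after spawning spdk_tgt, the harness calls waitforlisten with the new PID; the "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock..." line above is that helper announcing itself. A rough sketch of such a wait loop, as a simplified assumption about the helper's shape (the real waitforlisten in autotest_common.sh also verifies the RPC server answers, not merely that the socket file exists); pid, rpc_addr, and max_retries are the values visible in the log:

  pid=769288                    # spdk_tgt PID reported in the log
  rpc_addr=/var/tmp/spdk.sock   # default JSON-RPC listen address
  max_retries=100

  for ((i = 0; i < max_retries; i++)); do
      # Bail out if the target died during startup instead of spinning forever.
      kill -0 "$pid" 2>/dev/null || { echo "spdk_tgt exited prematurely" >&2; exit 1; }
      # Success once the UNIX domain socket appears.
      [ -S "$rpc_addr" ] && break
      sleep 0.5
  done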
00:05:22.971 [2024-07-11 21:11:57.546873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid769288 ] 00:05:22.971 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.971 [2024-07-11 21:11:57.610964] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.971 [2024-07-11 21:11:57.700369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.228 21:11:57 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.228 21:11:57 -- common/autotest_common.sh@862 -- # return 0 00:05:23.228 21:11:57 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:05:23.228 21:11:57 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:05:23.228 21:11:57 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:05:26.517 nvme0n1 00:05:26.517 21:12:01 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:26.517 [2024-07-11 21:12:01.282032] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:26.518 [2024-07-11 21:12:01.282088] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:26.518 request: 00:05:26.518 { 00:05:26.518 "nvme_ctrlr_name": "nvme0", 00:05:26.518 "password": "test", 00:05:26.518 "method": "bdev_nvme_opal_revert", 00:05:26.518 "req_id": 1 00:05:26.518 } 00:05:26.518 Got JSON-RPC error response 00:05:26.518 response: 00:05:26.518 { 00:05:26.518 "code": -32603, 00:05:26.518 "message": "Internal error" 00:05:26.518 } 00:05:26.776 21:12:01 -- common/autotest_common.sh@1604 -- # true 00:05:26.776 21:12:01 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:05:26.776 21:12:01 -- common/autotest_common.sh@1608 -- # killprocess 769288 00:05:26.776 21:12:01 -- common/autotest_common.sh@948 -- # '[' -z 769288 ']' 00:05:26.776 21:12:01 -- common/autotest_common.sh@952 -- # kill -0 769288 00:05:26.776 21:12:01 -- common/autotest_common.sh@953 -- # uname 00:05:26.776 21:12:01 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.776 21:12:01 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 769288 00:05:26.776 21:12:01 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:26.776 21:12:01 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:26.776 21:12:01 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 769288' 00:05:26.776 killing process with pid 769288 00:05:26.776 21:12:01 -- common/autotest_common.sh@967 -- # kill 769288 00:05:26.776 21:12:01 -- common/autotest_common.sh@972 -- # wait 769288 00:05:28.684 21:12:03 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:05:28.684 21:12:03 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:05:28.684 21:12:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:28.684 21:12:03 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:05:28.684 21:12:03 -- spdk/autotest.sh@162 -- # timing_enter lib 00:05:28.684 21:12:03 -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:28.684 21:12:03 -- common/autotest_common.sh@10 -- # set +x 00:05:28.684 21:12:03 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:05:28.684 21:12:03 -- spdk/autotest.sh@168 -- # run_test env 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:28.684 21:12:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.684 21:12:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.684 21:12:03 -- common/autotest_common.sh@10 -- # set +x 00:05:28.684 ************************************ 00:05:28.684 START TEST env 00:05:28.684 ************************************ 00:05:28.684 21:12:03 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:28.684 * Looking for test storage... 00:05:28.684 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:28.684 21:12:03 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:28.684 21:12:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.684 21:12:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.684 21:12:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.684 ************************************ 00:05:28.684 START TEST env_memory 00:05:28.684 ************************************ 00:05:28.684 21:12:03 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:28.684 00:05:28.684 00:05:28.684 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.684 http://cunit.sourceforge.net/ 00:05:28.684 00:05:28.684 00:05:28.684 Suite: memory 00:05:28.684 Test: alloc and free memory map ...[2024-07-11 21:12:03.188723] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:28.684 passed 00:05:28.684 Test: mem map translation ...[2024-07-11 21:12:03.209486] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:28.684 [2024-07-11 21:12:03.209507] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:28.684 [2024-07-11 21:12:03.209565] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:28.684 [2024-07-11 21:12:03.209577] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:28.684 passed 00:05:28.684 Test: mem map registration ...[2024-07-11 21:12:03.253259] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:05:28.684 [2024-07-11 21:12:03.253280] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:05:28.684 passed 00:05:28.685 Test: mem map adjacent registrations ...passed 00:05:28.685 00:05:28.685 Run Summary: Type Total Ran Passed Failed Inactive 00:05:28.685 suites 1 1 n/a 0 0 00:05:28.685 tests 4 4 4 0 0 00:05:28.685 asserts 152 152 152 0 n/a 00:05:28.685 00:05:28.685 Elapsed time = 0.148 seconds 00:05:28.685 00:05:28.685 real 0m0.156s 00:05:28.685 user 0m0.149s 00:05:28.685 sys 0m0.006s 00:05:28.685 21:12:03 
env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:28.685 21:12:03 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:28.685 ************************************ 00:05:28.685 END TEST env_memory 00:05:28.685 ************************************ 00:05:28.685 21:12:03 env -- common/autotest_common.sh@1142 -- # return 0 00:05:28.685 21:12:03 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:28.685 21:12:03 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:28.685 21:12:03 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:28.685 21:12:03 env -- common/autotest_common.sh@10 -- # set +x 00:05:28.685 ************************************ 00:05:28.685 START TEST env_vtophys 00:05:28.685 ************************************ 00:05:28.685 21:12:03 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:28.685 EAL: lib.eal log level changed from notice to debug 00:05:28.685 EAL: Detected lcore 0 as core 0 on socket 0 00:05:28.685 EAL: Detected lcore 1 as core 1 on socket 0 00:05:28.685 EAL: Detected lcore 2 as core 2 on socket 0 00:05:28.685 EAL: Detected lcore 3 as core 3 on socket 0 00:05:28.685 EAL: Detected lcore 4 as core 4 on socket 0 00:05:28.685 EAL: Detected lcore 5 as core 5 on socket 0 00:05:28.685 EAL: Detected lcore 6 as core 8 on socket 0 00:05:28.685 EAL: Detected lcore 7 as core 9 on socket 0 00:05:28.685 EAL: Detected lcore 8 as core 10 on socket 0 00:05:28.685 EAL: Detected lcore 9 as core 11 on socket 0 00:05:28.685 EAL: Detected lcore 10 as core 12 on socket 0 00:05:28.685 EAL: Detected lcore 11 as core 13 on socket 0 00:05:28.685 EAL: Detected lcore 12 as core 0 on socket 1 00:05:28.685 EAL: Detected lcore 13 as core 1 on socket 1 00:05:28.685 EAL: Detected lcore 14 as core 2 on socket 1 00:05:28.685 EAL: Detected lcore 15 as core 3 on socket 1 00:05:28.685 EAL: Detected lcore 16 as core 4 on socket 1 00:05:28.685 EAL: Detected lcore 17 as core 5 on socket 1 00:05:28.685 EAL: Detected lcore 18 as core 8 on socket 1 00:05:28.685 EAL: Detected lcore 19 as core 9 on socket 1 00:05:28.685 EAL: Detected lcore 20 as core 10 on socket 1 00:05:28.685 EAL: Detected lcore 21 as core 11 on socket 1 00:05:28.685 EAL: Detected lcore 22 as core 12 on socket 1 00:05:28.685 EAL: Detected lcore 23 as core 13 on socket 1 00:05:28.685 EAL: Detected lcore 24 as core 0 on socket 0 00:05:28.685 EAL: Detected lcore 25 as core 1 on socket 0 00:05:28.685 EAL: Detected lcore 26 as core 2 on socket 0 00:05:28.685 EAL: Detected lcore 27 as core 3 on socket 0 00:05:28.685 EAL: Detected lcore 28 as core 4 on socket 0 00:05:28.685 EAL: Detected lcore 29 as core 5 on socket 0 00:05:28.685 EAL: Detected lcore 30 as core 8 on socket 0 00:05:28.685 EAL: Detected lcore 31 as core 9 on socket 0 00:05:28.685 EAL: Detected lcore 32 as core 10 on socket 0 00:05:28.685 EAL: Detected lcore 33 as core 11 on socket 0 00:05:28.685 EAL: Detected lcore 34 as core 12 on socket 0 00:05:28.685 EAL: Detected lcore 35 as core 13 on socket 0 00:05:28.685 EAL: Detected lcore 36 as core 0 on socket 1 00:05:28.685 EAL: Detected lcore 37 as core 1 on socket 1 00:05:28.685 EAL: Detected lcore 38 as core 2 on socket 1 00:05:28.685 EAL: Detected lcore 39 as core 3 on socket 1 00:05:28.685 EAL: Detected lcore 40 as core 4 on socket 1 00:05:28.685 EAL: Detected lcore 41 as core 5 on socket 1 00:05:28.685 EAL: Detected 
lcore 42 as core 8 on socket 1 00:05:28.685 EAL: Detected lcore 43 as core 9 on socket 1 00:05:28.685 EAL: Detected lcore 44 as core 10 on socket 1 00:05:28.685 EAL: Detected lcore 45 as core 11 on socket 1 00:05:28.685 EAL: Detected lcore 46 as core 12 on socket 1 00:05:28.685 EAL: Detected lcore 47 as core 13 on socket 1 00:05:28.685 EAL: Maximum logical cores by configuration: 128 00:05:28.685 EAL: Detected CPU lcores: 48 00:05:28.685 EAL: Detected NUMA nodes: 2 00:05:28.685 EAL: Checking presence of .so 'librte_eal.so.24.0' 00:05:28.685 EAL: Detected shared linkage of DPDK 00:05:28.685 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so.24.0 00:05:28.685 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so.24.0 00:05:28.685 EAL: Registered [vdev] bus. 00:05:28.685 EAL: bus.vdev log level changed from disabled to notice 00:05:28.685 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so.24.0 00:05:28.685 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so.24.0 00:05:28.685 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:28.685 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:28.685 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_pci.so 00:05:28.685 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_bus_vdev.so 00:05:28.685 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_mempool_ring.so 00:05:28.685 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-24.0/librte_net_i40e.so 00:05:28.685 EAL: No shared files mode enabled, IPC will be disabled 00:05:28.685 EAL: No shared files mode enabled, IPC is disabled 00:05:28.685 EAL: Bus pci wants IOVA as 'DC' 00:05:28.685 EAL: Bus vdev wants IOVA as 'DC' 00:05:28.685 EAL: Buses did not request a specific IOVA mode. 00:05:28.685 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:28.685 EAL: Selected IOVA mode 'VA' 00:05:28.685 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.685 EAL: Probing VFIO support... 00:05:28.685 EAL: IOMMU type 1 (Type 1) is supported 00:05:28.685 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:28.685 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:28.685 EAL: VFIO support initialized 00:05:28.685 EAL: Ask a virtual area of 0x2e000 bytes 00:05:28.685 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:28.685 EAL: Setting up physically contiguous memory... 
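EAL lands on IOVA mode 'VA' above because neither bus demanded a specific mode and a Type 1 IOMMU is present. Outside the test, the same preconditions can be spot-checked with standard Linux sysfs paths (nothing here is specific to this log):

  # a non-empty iommu_groups tree means the kernel exposes an IOMMU, so IOVA=VA is viable
  ls /sys/kernel/iommu_groups | head
  # vfio_iommu_type1 backs the 'IOMMU type 1 (Type 1)' support probed above
  lsmod | grep vfio_iommu_type1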
00:05:28.685 EAL: Setting maximum number of open files to 524288 00:05:28.685 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:28.685 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:28.685 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:28.685 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.685 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:28.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.685 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.685 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:28.685 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:28.685 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.685 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:28.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.685 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.685 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:28.685 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:28.685 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.685 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:28.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.685 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.685 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:05:28.685 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:05:28.685 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.685 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:05:28.685 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:28.685 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.685 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:05:28.685 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:05:28.685 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:05:28.685 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.685 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:05:28.685 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:28.685 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.685 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:05:28.685 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:05:28.685 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.685 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:05:28.685 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:28.685 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.685 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:05:28.685 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:05:28.685 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.685 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:05:28.685 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:28.685 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.685 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:05:28.685 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:05:28.685 EAL: Ask a virtual area of 0x61000 bytes 00:05:28.685 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:05:28.685 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:05:28.685 EAL: Ask a virtual area of 0x400000000 bytes 00:05:28.685 EAL: Virtual area found 
at 0x201c01000000 (size = 0x400000000) 00:05:28.685 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:05:28.685 EAL: Hugepages will be freed exactly as allocated. 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: TSC frequency is ~2700000 KHz 00:05:28.686 EAL: Main lcore 0 is ready (tid=7fec98117a00;cpuset=[0]) 00:05:28.686 EAL: Trying to obtain current memory policy. 00:05:28.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.686 EAL: Restoring previous memory policy: 0 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was expanded by 2MB 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: No PCI address specified using 'addr=' in: bus=pci 00:05:28.686 EAL: Mem event callback 'spdk:(nil)' registered 00:05:28.686 00:05:28.686 00:05:28.686 CUnit - A unit testing framework for C - Version 2.1-3 00:05:28.686 http://cunit.sourceforge.net/ 00:05:28.686 00:05:28.686 00:05:28.686 Suite: components_suite 00:05:28.686 Test: vtophys_malloc_test ...passed 00:05:28.686 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:05:28.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.686 EAL: Restoring previous memory policy: 4 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was expanded by 4MB 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was shrunk by 4MB 00:05:28.686 EAL: Trying to obtain current memory policy. 00:05:28.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.686 EAL: Restoring previous memory policy: 4 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was expanded by 6MB 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was shrunk by 6MB 00:05:28.686 EAL: Trying to obtain current memory policy. 00:05:28.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.686 EAL: Restoring previous memory policy: 4 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was expanded by 10MB 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was shrunk by 10MB 00:05:28.686 EAL: Trying to obtain current memory policy. 
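Each memseg list reserved above is 0x400000000 bytes because EAL sizes it as n_segs × hugepage_sz, with the small 0x61000-byte areas apparently holding per-list metadata. The figure checks out with shell arithmetic:

  # 8192 segments x 2 MiB hugepages = 16 GiB per memseg list
  printf '0x%x\n' $(( 8192 * 2097152 ))    # -> 0x400000000

Four such lists per NUMA node put a 64 GiB ceiling on the hugepage memory this process could ever map per socket.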
00:05:28.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.686 EAL: Restoring previous memory policy: 4 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was expanded by 18MB 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was shrunk by 18MB 00:05:28.686 EAL: Trying to obtain current memory policy. 00:05:28.686 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.686 EAL: Restoring previous memory policy: 4 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.686 EAL: request: mp_malloc_sync 00:05:28.686 EAL: No shared files mode enabled, IPC is disabled 00:05:28.686 EAL: Heap on socket 0 was expanded by 34MB 00:05:28.686 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.966 EAL: request: mp_malloc_sync 00:05:28.966 EAL: No shared files mode enabled, IPC is disabled 00:05:28.966 EAL: Heap on socket 0 was shrunk by 34MB 00:05:28.966 EAL: Trying to obtain current memory policy. 00:05:28.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.966 EAL: Restoring previous memory policy: 4 00:05:28.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.966 EAL: request: mp_malloc_sync 00:05:28.966 EAL: No shared files mode enabled, IPC is disabled 00:05:28.966 EAL: Heap on socket 0 was expanded by 66MB 00:05:28.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.966 EAL: request: mp_malloc_sync 00:05:28.966 EAL: No shared files mode enabled, IPC is disabled 00:05:28.966 EAL: Heap on socket 0 was shrunk by 66MB 00:05:28.966 EAL: Trying to obtain current memory policy. 00:05:28.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.966 EAL: Restoring previous memory policy: 4 00:05:28.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.966 EAL: request: mp_malloc_sync 00:05:28.966 EAL: No shared files mode enabled, IPC is disabled 00:05:28.966 EAL: Heap on socket 0 was expanded by 130MB 00:05:28.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.966 EAL: request: mp_malloc_sync 00:05:28.966 EAL: No shared files mode enabled, IPC is disabled 00:05:28.966 EAL: Heap on socket 0 was shrunk by 130MB 00:05:28.966 EAL: Trying to obtain current memory policy. 00:05:28.966 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:28.966 EAL: Restoring previous memory policy: 4 00:05:28.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:28.966 EAL: request: mp_malloc_sync 00:05:28.966 EAL: No shared files mode enabled, IPC is disabled 00:05:28.966 EAL: Heap on socket 0 was expanded by 258MB 00:05:28.966 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.226 EAL: request: mp_malloc_sync 00:05:29.226 EAL: No shared files mode enabled, IPC is disabled 00:05:29.226 EAL: Heap on socket 0 was shrunk by 258MB 00:05:29.226 EAL: Trying to obtain current memory policy. 
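The expand/shrink sizes in vtophys_spdk_malloc_test follow a clean doubling pattern: 4, 6, 10, 18, 34, 66, 130, 258 MB and so on, consistent with each round allocating 2^n MB and the heap growing by one extra 2 MB hugepage on top. The sequence reproduces in one line:

  # heap expansion sizes, per the mem event callbacks above
  for n in $(seq 1 10); do printf '%dMB ' $(( (1 << n) + 2 )); done; echo
  # -> 4MB 6MB 10MB 18MB 34MB 66MB 130MB 258MB 514MB 1026MB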
00:05:29.226 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.226 EAL: Restoring previous memory policy: 4 00:05:29.226 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.226 EAL: request: mp_malloc_sync 00:05:29.226 EAL: No shared files mode enabled, IPC is disabled 00:05:29.226 EAL: Heap on socket 0 was expanded by 514MB 00:05:29.486 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.486 EAL: request: mp_malloc_sync 00:05:29.486 EAL: No shared files mode enabled, IPC is disabled 00:05:29.486 EAL: Heap on socket 0 was shrunk by 514MB 00:05:29.486 EAL: Trying to obtain current memory policy. 00:05:29.486 EAL: Setting policy MPOL_PREFERRED for socket 0 00:05:29.745 EAL: Restoring previous memory policy: 4 00:05:29.745 EAL: Calling mem event callback 'spdk:(nil)' 00:05:29.745 EAL: request: mp_malloc_sync 00:05:29.745 EAL: No shared files mode enabled, IPC is disabled 00:05:29.745 EAL: Heap on socket 0 was expanded by 1026MB 00:05:30.005 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.265 EAL: request: mp_malloc_sync 00:05:30.265 EAL: No shared files mode enabled, IPC is disabled 00:05:30.265 EAL: Heap on socket 0 was shrunk by 1026MB 00:05:30.265 passed 00:05:30.265 00:05:30.265 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.265 suites 1 1 n/a 0 0 00:05:30.265 tests 2 2 2 0 0 00:05:30.265 asserts 497 497 497 0 n/a 00:05:30.265 00:05:30.265 Elapsed time = 1.373 seconds 00:05:30.265 EAL: Calling mem event callback 'spdk:(nil)' 00:05:30.265 EAL: request: mp_malloc_sync 00:05:30.265 EAL: No shared files mode enabled, IPC is disabled 00:05:30.265 EAL: Heap on socket 0 was shrunk by 2MB 00:05:30.265 EAL: No shared files mode enabled, IPC is disabled 00:05:30.265 EAL: No shared files mode enabled, IPC is disabled 00:05:30.265 EAL: No shared files mode enabled, IPC is disabled 00:05:30.265 00:05:30.265 real 0m1.493s 00:05:30.265 user 0m0.857s 00:05:30.265 sys 0m0.601s 00:05:30.265 21:12:04 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.265 21:12:04 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:05:30.265 ************************************ 00:05:30.265 END TEST env_vtophys 00:05:30.265 ************************************ 00:05:30.265 21:12:04 env -- common/autotest_common.sh@1142 -- # return 0 00:05:30.265 21:12:04 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:30.265 21:12:04 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:30.265 21:12:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.265 21:12:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.265 ************************************ 00:05:30.265 START TEST env_pci 00:05:30.265 ************************************ 00:05:30.265 21:12:04 env.env_pci -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:05:30.265 00:05:30.265 00:05:30.265 CUnit - A unit testing framework for C - Version 2.1-3 00:05:30.265 http://cunit.sourceforge.net/ 00:05:30.265 00:05:30.265 00:05:30.265 Suite: pci 00:05:30.265 Test: pci_hook ...[2024-07-11 21:12:04.895861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 770299 has claimed it 00:05:30.265 EAL: Cannot find device (10000:00:01.0) 00:05:30.265 EAL: Failed to attach device on primary process 00:05:30.265 passed 00:05:30.265 
00:05:30.265 Run Summary: Type Total Ran Passed Failed Inactive 00:05:30.265 suites 1 1 n/a 0 0 00:05:30.265 tests 1 1 1 0 0 00:05:30.265 asserts 25 25 25 0 n/a 00:05:30.265 00:05:30.266 Elapsed time = 0.021 seconds 00:05:30.266 00:05:30.266 real 0m0.033s 00:05:30.266 user 0m0.011s 00:05:30.266 sys 0m0.022s 00:05:30.266 21:12:04 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.266 21:12:04 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:05:30.266 ************************************ 00:05:30.266 END TEST env_pci 00:05:30.266 ************************************ 00:05:30.266 21:12:04 env -- common/autotest_common.sh@1142 -- # return 0 00:05:30.266 21:12:04 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:05:30.266 21:12:04 env -- env/env.sh@15 -- # uname 00:05:30.266 21:12:04 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:05:30.266 21:12:04 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:05:30.266 21:12:04 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.266 21:12:04 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:05:30.266 21:12:04 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.266 21:12:04 env -- common/autotest_common.sh@10 -- # set +x 00:05:30.266 ************************************ 00:05:30.266 START TEST env_dpdk_post_init 00:05:30.266 ************************************ 00:05:30.266 21:12:04 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:05:30.266 EAL: Detected CPU lcores: 48 00:05:30.266 EAL: Detected NUMA nodes: 2 00:05:30.266 EAL: Detected shared linkage of DPDK 00:05:30.266 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:30.266 EAL: Selected IOVA mode 'VA' 00:05:30.266 EAL: No free 2048 kB hugepages reported on node 1 00:05:30.266 EAL: VFIO support initialized 00:05:30.266 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:30.538 EAL: Using IOMMU type 1 (Type 1) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:80:04.2 (socket 1) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 
0000:80:04.6 (socket 1) 00:05:30.538 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:05:31.479 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:05:34.769 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:05:34.769 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:05:34.769 Starting DPDK initialization... 00:05:34.769 Starting SPDK post initialization... 00:05:34.769 SPDK NVMe probe 00:05:34.769 Attaching to 0000:88:00.0 00:05:34.769 Attached to 0000:88:00.0 00:05:34.769 Cleaning up... 00:05:34.769 00:05:34.769 real 0m4.383s 00:05:34.769 user 0m3.263s 00:05:34.769 sys 0m0.182s 00:05:34.769 21:12:09 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.769 21:12:09 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:05:34.769 ************************************ 00:05:34.769 END TEST env_dpdk_post_init 00:05:34.769 ************************************ 00:05:34.769 21:12:09 env -- common/autotest_common.sh@1142 -- # return 0 00:05:34.769 21:12:09 env -- env/env.sh@26 -- # uname 00:05:34.769 21:12:09 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:05:34.769 21:12:09 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.769 21:12:09 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.769 21:12:09 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.769 21:12:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.769 ************************************ 00:05:34.769 START TEST env_mem_callbacks 00:05:34.769 ************************************ 00:05:34.769 21:12:09 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:05:34.769 EAL: Detected CPU lcores: 48 00:05:34.769 EAL: Detected NUMA nodes: 2 00:05:34.769 EAL: Detected shared linkage of DPDK 00:05:34.769 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:05:34.769 EAL: Selected IOVA mode 'VA' 00:05:34.769 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.769 EAL: VFIO support initialized 00:05:34.769 TELEMETRY: No legacy callbacks, legacy socket not created 00:05:34.769 00:05:34.769 00:05:34.769 CUnit - A unit testing framework for C - Version 2.1-3 00:05:34.769 http://cunit.sourceforge.net/ 00:05:34.769 00:05:34.769 00:05:34.769 Suite: memory 00:05:34.769 Test: test ... 
00:05:34.769 register 0x200000200000 2097152 00:05:34.769 malloc 3145728 00:05:34.769 register 0x200000400000 4194304 00:05:34.769 buf 0x200000500000 len 3145728 PASSED 00:05:34.769 malloc 64 00:05:34.769 buf 0x2000004fff40 len 64 PASSED 00:05:34.769 malloc 4194304 00:05:34.769 register 0x200000800000 6291456 00:05:34.769 buf 0x200000a00000 len 4194304 PASSED 00:05:34.769 free 0x200000500000 3145728 00:05:34.769 free 0x2000004fff40 64 00:05:34.769 unregister 0x200000400000 4194304 PASSED 00:05:34.769 free 0x200000a00000 4194304 00:05:34.769 unregister 0x200000800000 6291456 PASSED 00:05:34.769 malloc 8388608 00:05:34.769 register 0x200000400000 10485760 00:05:34.769 buf 0x200000600000 len 8388608 PASSED 00:05:34.769 free 0x200000600000 8388608 00:05:34.769 unregister 0x200000400000 10485760 PASSED 00:05:34.769 passed 00:05:34.769 00:05:34.769 Run Summary: Type Total Ran Passed Failed Inactive 00:05:34.769 suites 1 1 n/a 0 0 00:05:34.769 tests 1 1 1 0 0 00:05:34.769 asserts 15 15 15 0 n/a 00:05:34.769 00:05:34.769 Elapsed time = 0.005 seconds 00:05:34.769 00:05:34.769 real 0m0.048s 00:05:34.769 user 0m0.012s 00:05:34.769 sys 0m0.036s 00:05:34.769 21:12:09 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.769 21:12:09 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:05:34.769 ************************************ 00:05:34.769 END TEST env_mem_callbacks 00:05:34.769 ************************************ 00:05:34.769 21:12:09 env -- common/autotest_common.sh@1142 -- # return 0 00:05:34.769 00:05:34.769 real 0m6.371s 00:05:34.769 user 0m4.396s 00:05:34.769 sys 0m1.019s 00:05:34.769 21:12:09 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.769 21:12:09 env -- common/autotest_common.sh@10 -- # set +x 00:05:34.769 ************************************ 00:05:34.769 END TEST env 00:05:34.769 ************************************ 00:05:34.769 21:12:09 -- common/autotest_common.sh@1142 -- # return 0 00:05:34.769 21:12:09 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:34.769 21:12:09 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:34.769 21:12:09 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.769 21:12:09 -- common/autotest_common.sh@10 -- # set +x 00:05:34.769 ************************************ 00:05:34.769 START TEST rpc 00:05:34.769 ************************************ 00:05:34.769 21:12:09 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:05:35.028 * Looking for test storage... 00:05:35.028 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:35.028 21:12:09 rpc -- rpc/rpc.sh@65 -- # spdk_pid=771454 00:05:35.028 21:12:09 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:05:35.028 21:12:09 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:35.028 21:12:09 rpc -- rpc/rpc.sh@67 -- # waitforlisten 771454 00:05:35.028 21:12:09 rpc -- common/autotest_common.sh@829 -- # '[' -z 771454 ']' 00:05:35.028 21:12:09 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.028 21:12:09 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.028 21:12:09 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:35.028 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.028 21:12:09 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.028 21:12:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.028 [2024-07-11 21:12:09.596217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:35.028 [2024-07-11 21:12:09.596311] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771454 ] 00:05:35.028 EAL: No free 2048 kB hugepages reported on node 1 00:05:35.028 [2024-07-11 21:12:09.653716] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:35.028 [2024-07-11 21:12:09.739937] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:05:35.028 [2024-07-11 21:12:09.739992] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 771454' to capture a snapshot of events at runtime. 00:05:35.028 [2024-07-11 21:12:09.740021] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:35.028 [2024-07-11 21:12:09.740033] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:35.028 [2024-07-11 21:12:09.740053] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid771454 for offline analysis/debug. 00:05:35.028 [2024-07-11 21:12:09.740079] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.286 21:12:09 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:35.286 21:12:09 rpc -- common/autotest_common.sh@862 -- # return 0 00:05:35.286 21:12:09 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:35.286 21:12:09 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:35.286 21:12:09 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:05:35.286 21:12:09 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:05:35.286 21:12:09 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.286 21:12:09 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.286 21:12:09 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.286 ************************************ 00:05:35.287 START TEST rpc_integrity 00:05:35.287 ************************************ 00:05:35.287 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:35.287 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:35.287 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.287 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.287 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.287 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:05:35.287 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:35.545 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:35.545 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:35.545 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.545 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.545 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.545 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:05:35.545 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:35.545 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.545 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.545 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.545 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:35.545 { 00:05:35.545 "name": "Malloc0", 00:05:35.545 "aliases": [ 00:05:35.545 "1b6bc7de-d2b3-48a3-88a7-065ea87ba221" 00:05:35.545 ], 00:05:35.545 "product_name": "Malloc disk", 00:05:35.545 "block_size": 512, 00:05:35.545 "num_blocks": 16384, 00:05:35.545 "uuid": "1b6bc7de-d2b3-48a3-88a7-065ea87ba221", 00:05:35.545 "assigned_rate_limits": { 00:05:35.545 "rw_ios_per_sec": 0, 00:05:35.545 "rw_mbytes_per_sec": 0, 00:05:35.545 "r_mbytes_per_sec": 0, 00:05:35.545 "w_mbytes_per_sec": 0 00:05:35.545 }, 00:05:35.545 "claimed": false, 00:05:35.545 "zoned": false, 00:05:35.545 "supported_io_types": { 00:05:35.545 "read": true, 00:05:35.545 "write": true, 00:05:35.545 "unmap": true, 00:05:35.545 "flush": true, 00:05:35.545 "reset": true, 00:05:35.545 "nvme_admin": false, 00:05:35.545 "nvme_io": false, 00:05:35.545 "nvme_io_md": false, 00:05:35.545 "write_zeroes": true, 00:05:35.546 "zcopy": true, 00:05:35.546 "get_zone_info": false, 00:05:35.546 "zone_management": false, 00:05:35.546 "zone_append": false, 00:05:35.546 "compare": false, 00:05:35.546 "compare_and_write": false, 00:05:35.546 "abort": true, 00:05:35.546 "seek_hole": false, 00:05:35.546 "seek_data": false, 00:05:35.546 "copy": true, 00:05:35.546 "nvme_iov_md": false 00:05:35.546 }, 00:05:35.546 "memory_domains": [ 00:05:35.546 { 00:05:35.546 "dma_device_id": "system", 00:05:35.546 "dma_device_type": 1 00:05:35.546 }, 00:05:35.546 { 00:05:35.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.546 "dma_device_type": 2 00:05:35.546 } 00:05:35.546 ], 00:05:35.546 "driver_specific": {} 00:05:35.546 } 00:05:35.546 ]' 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.546 [2024-07-11 21:12:10.137623] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:05:35.546 [2024-07-11 21:12:10.137668] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:35.546 [2024-07-11 21:12:10.137692] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x768bb0 00:05:35.546 [2024-07-11 21:12:10.137707] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:35.546 
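The vbdev_passthru notices around this point come from rpc_integrity layering a passthru bdev over a malloc base, listing both, then tearing down in reverse order. A condensed sketch of the RPC sequence, against the default /var/tmp/spdk.sock socket seen in this log:

  scripts/rpc.py bdev_malloc_create 8 512                # 8 MB, 512 B blocks -> Malloc0
  scripts/rpc.py bdev_passthru_create -b Malloc0 -p Passthru0
  scripts/rpc.py bdev_get_bdevs | jq length              # expect 2: Malloc0 + Passthru0
  scripts/rpc.py bdev_passthru_delete Passthru0
  scripts/rpc.py bdev_malloc_delete Malloc0
  scripts/rpc.py bdev_get_bdevs | jq length              # back to 0

Deleting the passthru before its base matters: Malloc0 stays claimed (claim_type exclusive_write in the dump below) until Passthru0 releases it.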
[2024-07-11 21:12:10.139214] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:35.546 [2024-07-11 21:12:10.139246] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:35.546 Passthru0 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:35.546 { 00:05:35.546 "name": "Malloc0", 00:05:35.546 "aliases": [ 00:05:35.546 "1b6bc7de-d2b3-48a3-88a7-065ea87ba221" 00:05:35.546 ], 00:05:35.546 "product_name": "Malloc disk", 00:05:35.546 "block_size": 512, 00:05:35.546 "num_blocks": 16384, 00:05:35.546 "uuid": "1b6bc7de-d2b3-48a3-88a7-065ea87ba221", 00:05:35.546 "assigned_rate_limits": { 00:05:35.546 "rw_ios_per_sec": 0, 00:05:35.546 "rw_mbytes_per_sec": 0, 00:05:35.546 "r_mbytes_per_sec": 0, 00:05:35.546 "w_mbytes_per_sec": 0 00:05:35.546 }, 00:05:35.546 "claimed": true, 00:05:35.546 "claim_type": "exclusive_write", 00:05:35.546 "zoned": false, 00:05:35.546 "supported_io_types": { 00:05:35.546 "read": true, 00:05:35.546 "write": true, 00:05:35.546 "unmap": true, 00:05:35.546 "flush": true, 00:05:35.546 "reset": true, 00:05:35.546 "nvme_admin": false, 00:05:35.546 "nvme_io": false, 00:05:35.546 "nvme_io_md": false, 00:05:35.546 "write_zeroes": true, 00:05:35.546 "zcopy": true, 00:05:35.546 "get_zone_info": false, 00:05:35.546 "zone_management": false, 00:05:35.546 "zone_append": false, 00:05:35.546 "compare": false, 00:05:35.546 "compare_and_write": false, 00:05:35.546 "abort": true, 00:05:35.546 "seek_hole": false, 00:05:35.546 "seek_data": false, 00:05:35.546 "copy": true, 00:05:35.546 "nvme_iov_md": false 00:05:35.546 }, 00:05:35.546 "memory_domains": [ 00:05:35.546 { 00:05:35.546 "dma_device_id": "system", 00:05:35.546 "dma_device_type": 1 00:05:35.546 }, 00:05:35.546 { 00:05:35.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.546 "dma_device_type": 2 00:05:35.546 } 00:05:35.546 ], 00:05:35.546 "driver_specific": {} 00:05:35.546 }, 00:05:35.546 { 00:05:35.546 "name": "Passthru0", 00:05:35.546 "aliases": [ 00:05:35.546 "77eca7c0-0af0-56cd-9b52-543d5caa1dfe" 00:05:35.546 ], 00:05:35.546 "product_name": "passthru", 00:05:35.546 "block_size": 512, 00:05:35.546 "num_blocks": 16384, 00:05:35.546 "uuid": "77eca7c0-0af0-56cd-9b52-543d5caa1dfe", 00:05:35.546 "assigned_rate_limits": { 00:05:35.546 "rw_ios_per_sec": 0, 00:05:35.546 "rw_mbytes_per_sec": 0, 00:05:35.546 "r_mbytes_per_sec": 0, 00:05:35.546 "w_mbytes_per_sec": 0 00:05:35.546 }, 00:05:35.546 "claimed": false, 00:05:35.546 "zoned": false, 00:05:35.546 "supported_io_types": { 00:05:35.546 "read": true, 00:05:35.546 "write": true, 00:05:35.546 "unmap": true, 00:05:35.546 "flush": true, 00:05:35.546 "reset": true, 00:05:35.546 "nvme_admin": false, 00:05:35.546 "nvme_io": false, 00:05:35.546 "nvme_io_md": false, 00:05:35.546 "write_zeroes": true, 00:05:35.546 "zcopy": true, 00:05:35.546 "get_zone_info": false, 00:05:35.546 "zone_management": false, 00:05:35.546 "zone_append": false, 00:05:35.546 "compare": false, 00:05:35.546 "compare_and_write": false, 00:05:35.546 "abort": true, 00:05:35.546 "seek_hole": false, 
00:05:35.546 "seek_data": false, 00:05:35.546 "copy": true, 00:05:35.546 "nvme_iov_md": false 00:05:35.546 }, 00:05:35.546 "memory_domains": [ 00:05:35.546 { 00:05:35.546 "dma_device_id": "system", 00:05:35.546 "dma_device_type": 1 00:05:35.546 }, 00:05:35.546 { 00:05:35.546 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.546 "dma_device_type": 2 00:05:35.546 } 00:05:35.546 ], 00:05:35.546 "driver_specific": { 00:05:35.546 "passthru": { 00:05:35.546 "name": "Passthru0", 00:05:35.546 "base_bdev_name": "Malloc0" 00:05:35.546 } 00:05:35.546 } 00:05:35.546 } 00:05:35.546 ]' 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:35.546 21:12:10 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:35.546 00:05:35.546 real 0m0.234s 00:05:35.546 user 0m0.150s 00:05:35.546 sys 0m0.022s 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.546 21:12:10 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:35.546 ************************************ 00:05:35.546 END TEST rpc_integrity 00:05:35.546 ************************************ 00:05:35.546 21:12:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:35.546 21:12:10 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:35.546 21:12:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.546 21:12:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.546 21:12:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.546 ************************************ 00:05:35.546 START TEST rpc_plugins 00:05:35.546 ************************************ 00:05:35.546 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:05:35.546 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:35.546 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.546 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.804 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.804 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:35.804 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:05:35.804 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.804 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.804 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.804 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:35.804 { 00:05:35.804 "name": "Malloc1", 00:05:35.804 "aliases": [ 00:05:35.804 "433ced4c-024f-4954-900f-4be18744d3c8" 00:05:35.804 ], 00:05:35.804 "product_name": "Malloc disk", 00:05:35.804 "block_size": 4096, 00:05:35.804 "num_blocks": 256, 00:05:35.804 "uuid": "433ced4c-024f-4954-900f-4be18744d3c8", 00:05:35.804 "assigned_rate_limits": { 00:05:35.804 "rw_ios_per_sec": 0, 00:05:35.804 "rw_mbytes_per_sec": 0, 00:05:35.804 "r_mbytes_per_sec": 0, 00:05:35.804 "w_mbytes_per_sec": 0 00:05:35.804 }, 00:05:35.804 "claimed": false, 00:05:35.804 "zoned": false, 00:05:35.804 "supported_io_types": { 00:05:35.804 "read": true, 00:05:35.804 "write": true, 00:05:35.804 "unmap": true, 00:05:35.804 "flush": true, 00:05:35.804 "reset": true, 00:05:35.804 "nvme_admin": false, 00:05:35.804 "nvme_io": false, 00:05:35.804 "nvme_io_md": false, 00:05:35.804 "write_zeroes": true, 00:05:35.804 "zcopy": true, 00:05:35.804 "get_zone_info": false, 00:05:35.804 "zone_management": false, 00:05:35.804 "zone_append": false, 00:05:35.804 "compare": false, 00:05:35.804 "compare_and_write": false, 00:05:35.804 "abort": true, 00:05:35.804 "seek_hole": false, 00:05:35.804 "seek_data": false, 00:05:35.804 "copy": true, 00:05:35.804 "nvme_iov_md": false 00:05:35.804 }, 00:05:35.804 "memory_domains": [ 00:05:35.804 { 00:05:35.804 "dma_device_id": "system", 00:05:35.804 "dma_device_type": 1 00:05:35.804 }, 00:05:35.804 { 00:05:35.804 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:35.804 "dma_device_type": 2 00:05:35.804 } 00:05:35.804 ], 00:05:35.804 "driver_specific": {} 00:05:35.804 } 00:05:35.804 ]' 00:05:35.804 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:35.804 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:35.804 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:35.804 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.804 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.804 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.805 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:35.805 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.805 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.805 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.805 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:35.805 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:35.805 21:12:10 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:35.805 00:05:35.805 real 0m0.124s 00:05:35.805 user 0m0.079s 00:05:35.805 sys 0m0.008s 00:05:35.805 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.805 21:12:10 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:35.805 ************************************ 00:05:35.805 END TEST rpc_plugins 00:05:35.805 ************************************ 00:05:35.805 21:12:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:35.805 21:12:10 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:35.805 21:12:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.805 21:12:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.805 21:12:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.805 ************************************ 00:05:35.805 START TEST rpc_trace_cmd_test 00:05:35.805 ************************************ 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:35.805 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid771454", 00:05:35.805 "tpoint_group_mask": "0x8", 00:05:35.805 "iscsi_conn": { 00:05:35.805 "mask": "0x2", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "scsi": { 00:05:35.805 "mask": "0x4", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "bdev": { 00:05:35.805 "mask": "0x8", 00:05:35.805 "tpoint_mask": "0xffffffffffffffff" 00:05:35.805 }, 00:05:35.805 "nvmf_rdma": { 00:05:35.805 "mask": "0x10", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "nvmf_tcp": { 00:05:35.805 "mask": "0x20", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "ftl": { 00:05:35.805 "mask": "0x40", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "blobfs": { 00:05:35.805 "mask": "0x80", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "dsa": { 00:05:35.805 "mask": "0x200", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "thread": { 00:05:35.805 "mask": "0x400", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "nvme_pcie": { 00:05:35.805 "mask": "0x800", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "iaa": { 00:05:35.805 "mask": "0x1000", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "nvme_tcp": { 00:05:35.805 "mask": "0x2000", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "bdev_nvme": { 00:05:35.805 "mask": "0x4000", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 }, 00:05:35.805 "sock": { 00:05:35.805 "mask": "0x8000", 00:05:35.805 "tpoint_mask": "0x0" 00:05:35.805 } 00:05:35.805 }' 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:35.805 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:36.064 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:36.064 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:36.064 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:36.064 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:36.064 21:12:10 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
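The tpoint map printed above reflects spdk_tgt having been launched with '-e bdev': group mask 0x8 selects the bdev group, whose own tpoint_mask is fully enabled while every other group stays at 0x0. Assuming the default RPC socket, the same state can be read back directly:

  scripts/rpc.py trace_get_info | jq -r .tpoint_group_mask   # "0x8" -> bdev group only
  scripts/rpc.py trace_get_info | jq -r .bdev.tpoint_mask    # 0xffffffffffffffff, all bdev tpoints
  # trace records accumulate in the shm file named after the target's pid
  ls /dev/shm/spdk_tgt_trace.pid*

The shm path in the JSON (/dev/shm/spdk_tgt_trace.pid771454) is what 'spdk_trace -s spdk_tgt -p 771454', suggested earlier in this log, would consume for offline decoding.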
00:05:36.064 00:05:36.064 real 0m0.203s 00:05:36.064 user 0m0.184s 00:05:36.064 sys 0m0.010s 00:05:36.064 21:12:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.064 21:12:10 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:36.064 ************************************ 00:05:36.064 END TEST rpc_trace_cmd_test 00:05:36.064 ************************************ 00:05:36.064 21:12:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.064 21:12:10 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:36.064 21:12:10 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:36.064 21:12:10 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:36.064 21:12:10 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.064 21:12:10 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.064 21:12:10 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.064 ************************************ 00:05:36.064 START TEST rpc_daemon_integrity 00:05:36.064 ************************************ 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:36.064 { 00:05:36.064 "name": "Malloc2", 00:05:36.064 "aliases": [ 00:05:36.064 "32986085-4109-4271-a7d9-74d7c2d7aab1" 00:05:36.064 ], 00:05:36.064 "product_name": "Malloc disk", 00:05:36.064 "block_size": 512, 00:05:36.064 "num_blocks": 16384, 00:05:36.064 "uuid": "32986085-4109-4271-a7d9-74d7c2d7aab1", 00:05:36.064 "assigned_rate_limits": { 00:05:36.064 "rw_ios_per_sec": 0, 00:05:36.064 "rw_mbytes_per_sec": 0, 00:05:36.064 "r_mbytes_per_sec": 0, 00:05:36.064 "w_mbytes_per_sec": 0 00:05:36.064 }, 00:05:36.064 "claimed": false, 00:05:36.064 "zoned": false, 00:05:36.064 "supported_io_types": { 00:05:36.064 "read": true, 00:05:36.064 "write": true, 00:05:36.064 "unmap": true, 00:05:36.064 "flush": true, 00:05:36.064 "reset": true, 00:05:36.064 "nvme_admin": false, 00:05:36.064 "nvme_io": false, 
00:05:36.064 "nvme_io_md": false, 00:05:36.064 "write_zeroes": true, 00:05:36.064 "zcopy": true, 00:05:36.064 "get_zone_info": false, 00:05:36.064 "zone_management": false, 00:05:36.064 "zone_append": false, 00:05:36.064 "compare": false, 00:05:36.064 "compare_and_write": false, 00:05:36.064 "abort": true, 00:05:36.064 "seek_hole": false, 00:05:36.064 "seek_data": false, 00:05:36.064 "copy": true, 00:05:36.064 "nvme_iov_md": false 00:05:36.064 }, 00:05:36.064 "memory_domains": [ 00:05:36.064 { 00:05:36.064 "dma_device_id": "system", 00:05:36.064 "dma_device_type": 1 00:05:36.064 }, 00:05:36.064 { 00:05:36.064 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.064 "dma_device_type": 2 00:05:36.064 } 00:05:36.064 ], 00:05:36.064 "driver_specific": {} 00:05:36.064 } 00:05:36.064 ]' 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.064 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.064 [2024-07-11 21:12:10.831652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:36.064 [2024-07-11 21:12:10.831694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:36.064 [2024-07-11 21:12:10.831721] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x7696f0 00:05:36.064 [2024-07-11 21:12:10.831738] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:36.064 [2024-07-11 21:12:10.833114] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:36.064 [2024-07-11 21:12:10.833157] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:36.324 Passthru0 00:05:36.324 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.324 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:36.324 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.324 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.324 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.324 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:36.324 { 00:05:36.324 "name": "Malloc2", 00:05:36.324 "aliases": [ 00:05:36.324 "32986085-4109-4271-a7d9-74d7c2d7aab1" 00:05:36.324 ], 00:05:36.324 "product_name": "Malloc disk", 00:05:36.324 "block_size": 512, 00:05:36.324 "num_blocks": 16384, 00:05:36.324 "uuid": "32986085-4109-4271-a7d9-74d7c2d7aab1", 00:05:36.324 "assigned_rate_limits": { 00:05:36.324 "rw_ios_per_sec": 0, 00:05:36.324 "rw_mbytes_per_sec": 0, 00:05:36.324 "r_mbytes_per_sec": 0, 00:05:36.324 "w_mbytes_per_sec": 0 00:05:36.324 }, 00:05:36.324 "claimed": true, 00:05:36.324 "claim_type": "exclusive_write", 00:05:36.324 "zoned": false, 00:05:36.324 "supported_io_types": { 00:05:36.324 "read": true, 00:05:36.324 "write": true, 00:05:36.324 "unmap": true, 00:05:36.324 "flush": true, 00:05:36.324 "reset": true, 00:05:36.324 "nvme_admin": false, 00:05:36.324 "nvme_io": false, 00:05:36.324 "nvme_io_md": false, 00:05:36.324 "write_zeroes": true, 00:05:36.324 "zcopy": true, 00:05:36.324 "get_zone_info": 
false, 00:05:36.324 "zone_management": false, 00:05:36.324 "zone_append": false, 00:05:36.324 "compare": false, 00:05:36.324 "compare_and_write": false, 00:05:36.324 "abort": true, 00:05:36.324 "seek_hole": false, 00:05:36.324 "seek_data": false, 00:05:36.324 "copy": true, 00:05:36.324 "nvme_iov_md": false 00:05:36.324 }, 00:05:36.324 "memory_domains": [ 00:05:36.324 { 00:05:36.324 "dma_device_id": "system", 00:05:36.324 "dma_device_type": 1 00:05:36.324 }, 00:05:36.324 { 00:05:36.324 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.324 "dma_device_type": 2 00:05:36.324 } 00:05:36.324 ], 00:05:36.324 "driver_specific": {} 00:05:36.324 }, 00:05:36.324 { 00:05:36.324 "name": "Passthru0", 00:05:36.324 "aliases": [ 00:05:36.324 "cf1d9d03-3a10-5d81-931e-15661d9cb9ae" 00:05:36.324 ], 00:05:36.324 "product_name": "passthru", 00:05:36.324 "block_size": 512, 00:05:36.324 "num_blocks": 16384, 00:05:36.324 "uuid": "cf1d9d03-3a10-5d81-931e-15661d9cb9ae", 00:05:36.324 "assigned_rate_limits": { 00:05:36.324 "rw_ios_per_sec": 0, 00:05:36.325 "rw_mbytes_per_sec": 0, 00:05:36.325 "r_mbytes_per_sec": 0, 00:05:36.325 "w_mbytes_per_sec": 0 00:05:36.325 }, 00:05:36.325 "claimed": false, 00:05:36.325 "zoned": false, 00:05:36.325 "supported_io_types": { 00:05:36.325 "read": true, 00:05:36.325 "write": true, 00:05:36.325 "unmap": true, 00:05:36.325 "flush": true, 00:05:36.325 "reset": true, 00:05:36.325 "nvme_admin": false, 00:05:36.325 "nvme_io": false, 00:05:36.325 "nvme_io_md": false, 00:05:36.325 "write_zeroes": true, 00:05:36.325 "zcopy": true, 00:05:36.325 "get_zone_info": false, 00:05:36.325 "zone_management": false, 00:05:36.325 "zone_append": false, 00:05:36.325 "compare": false, 00:05:36.325 "compare_and_write": false, 00:05:36.325 "abort": true, 00:05:36.325 "seek_hole": false, 00:05:36.325 "seek_data": false, 00:05:36.325 "copy": true, 00:05:36.325 "nvme_iov_md": false 00:05:36.325 }, 00:05:36.325 "memory_domains": [ 00:05:36.325 { 00:05:36.325 "dma_device_id": "system", 00:05:36.325 "dma_device_type": 1 00:05:36.325 }, 00:05:36.325 { 00:05:36.325 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:36.325 "dma_device_type": 2 00:05:36.325 } 00:05:36.325 ], 00:05:36.325 "driver_specific": { 00:05:36.325 "passthru": { 00:05:36.325 "name": "Passthru0", 00:05:36.325 "base_bdev_name": "Malloc2" 00:05:36.325 } 00:05:36.325 } 00:05:36.325 } 00:05:36.325 ]' 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.325 21:12:10 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:36.325 00:05:36.325 real 0m0.229s 00:05:36.325 user 0m0.149s 00:05:36.325 sys 0m0.026s 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.325 21:12:10 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:36.325 ************************************ 00:05:36.325 END TEST rpc_daemon_integrity 00:05:36.325 ************************************ 00:05:36.325 21:12:10 rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.325 21:12:10 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:36.325 21:12:10 rpc -- rpc/rpc.sh@84 -- # killprocess 771454 00:05:36.325 21:12:10 rpc -- common/autotest_common.sh@948 -- # '[' -z 771454 ']' 00:05:36.325 21:12:10 rpc -- common/autotest_common.sh@952 -- # kill -0 771454 00:05:36.325 21:12:10 rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.325 21:12:10 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:36.325 21:12:10 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771454 00:05:36.325 21:12:11 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:36.325 21:12:11 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:36.325 21:12:11 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771454' 00:05:36.325 killing process with pid 771454 00:05:36.325 21:12:11 rpc -- common/autotest_common.sh@967 -- # kill 771454 00:05:36.325 21:12:11 rpc -- common/autotest_common.sh@972 -- # wait 771454 00:05:36.896 00:05:36.896 real 0m1.908s 00:05:36.896 user 0m2.398s 00:05:36.896 sys 0m0.600s 00:05:36.896 21:12:11 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.896 21:12:11 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.896 ************************************ 00:05:36.896 END TEST rpc 00:05:36.896 ************************************ 00:05:36.896 21:12:11 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.896 21:12:11 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:36.896 21:12:11 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.896 21:12:11 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.896 21:12:11 -- common/autotest_common.sh@10 -- # set +x 00:05:36.896 ************************************ 00:05:36.896 START TEST skip_rpc 00:05:36.896 ************************************ 00:05:36.896 21:12:11 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:36.896 * Looking for test storage... 
00:05:36.896 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:36.896 21:12:11 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:36.896 21:12:11 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:36.896 21:12:11 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:36.896 21:12:11 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.896 21:12:11 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.896 21:12:11 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.896 ************************************ 00:05:36.896 START TEST skip_rpc 00:05:36.896 ************************************ 00:05:36.896 21:12:11 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:05:36.896 21:12:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=771895 00:05:36.896 21:12:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:36.896 21:12:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:36.896 21:12:11 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:05:36.896 [2024-07-11 21:12:11.581873] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:36.896 [2024-07-11 21:12:11.581947] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid771895 ] 00:05:36.896 EAL: No free 2048 kB hugepages reported on node 1 00:05:36.896 [2024-07-11 21:12:11.638731] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.155 [2024-07-11 21:12:11.728024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 771895 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 771895 ']' 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 771895 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 771895 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 771895' 00:05:42.432 killing process with pid 771895 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 771895 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 771895 00:05:42.432 00:05:42.432 real 0m5.443s 00:05:42.432 user 0m5.120s 00:05:42.432 sys 0m0.332s 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.432 21:12:16 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.432 ************************************ 00:05:42.432 END TEST skip_rpc 00:05:42.432 ************************************ 00:05:42.432 21:12:16 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:42.432 21:12:16 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:42.432 21:12:16 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.432 21:12:16 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.432 21:12:16 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.432 ************************************ 00:05:42.432 START TEST skip_rpc_with_json 00:05:42.432 ************************************ 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=772588 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 772588 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 772588 ']' 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
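The TEST skip_rpc pass that completed above reduces to a single negative check: start the target with --no-rpc-server, then confirm that any RPC attempt fails because /var/tmp/spdk.sock is never created. A minimal reproduction sketch, assuming an SPDK build under $SPDK_DIR (an illustrative path, not this runner's workspace):

    # Sketch only: $SPDK_DIR is an assumed local checkout with a built spdk_tgt.
    SPDK_DIR=${SPDK_DIR:-/path/to/spdk}
    "$SPDK_DIR/build/bin/spdk_tgt" --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5   # same settle time the test script uses before probing
    if "$SPDK_DIR/scripts/rpc.py" spdk_get_version; then
        echo "unexpected: RPC server answered with --no-rpc-server" >&2
    fi
    kill "$pid" && wait "$pid"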
00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.432 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.432 [2024-07-11 21:12:17.073569] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:42.432 [2024-07-11 21:12:17.073653] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid772588 ] 00:05:42.432 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.432 [2024-07-11 21:12:17.136029] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.691 [2024-07-11 21:12:17.232496] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.950 [2024-07-11 21:12:17.495048] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:42.950 request: 00:05:42.950 { 00:05:42.950 "trtype": "tcp", 00:05:42.950 "method": "nvmf_get_transports", 00:05:42.950 "req_id": 1 00:05:42.950 } 00:05:42.950 Got JSON-RPC error response 00:05:42.950 response: 00:05:42.950 { 00:05:42.950 "code": -19, 00:05:42.950 "message": "No such device" 00:05:42.950 } 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.950 [2024-07-11 21:12:17.503192] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:42.950 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:42.950 { 00:05:42.950 "subsystems": [ 00:05:42.950 { 00:05:42.950 "subsystem": "vfio_user_target", 00:05:42.951 "config": null 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "keyring", 00:05:42.951 "config": [] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "iobuf", 00:05:42.951 "config": [ 00:05:42.951 { 00:05:42.951 "method": "iobuf_set_options", 00:05:42.951 "params": { 00:05:42.951 "small_pool_count": 8192, 00:05:42.951 "large_pool_count": 1024, 00:05:42.951 "small_bufsize": 8192, 00:05:42.951 "large_bufsize": 
135168 00:05:42.951 } 00:05:42.951 } 00:05:42.951 ] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "sock", 00:05:42.951 "config": [ 00:05:42.951 { 00:05:42.951 "method": "sock_set_default_impl", 00:05:42.951 "params": { 00:05:42.951 "impl_name": "posix" 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "sock_impl_set_options", 00:05:42.951 "params": { 00:05:42.951 "impl_name": "ssl", 00:05:42.951 "recv_buf_size": 4096, 00:05:42.951 "send_buf_size": 4096, 00:05:42.951 "enable_recv_pipe": true, 00:05:42.951 "enable_quickack": false, 00:05:42.951 "enable_placement_id": 0, 00:05:42.951 "enable_zerocopy_send_server": true, 00:05:42.951 "enable_zerocopy_send_client": false, 00:05:42.951 "zerocopy_threshold": 0, 00:05:42.951 "tls_version": 0, 00:05:42.951 "enable_ktls": false 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "sock_impl_set_options", 00:05:42.951 "params": { 00:05:42.951 "impl_name": "posix", 00:05:42.951 "recv_buf_size": 2097152, 00:05:42.951 "send_buf_size": 2097152, 00:05:42.951 "enable_recv_pipe": true, 00:05:42.951 "enable_quickack": false, 00:05:42.951 "enable_placement_id": 0, 00:05:42.951 "enable_zerocopy_send_server": true, 00:05:42.951 "enable_zerocopy_send_client": false, 00:05:42.951 "zerocopy_threshold": 0, 00:05:42.951 "tls_version": 0, 00:05:42.951 "enable_ktls": false 00:05:42.951 } 00:05:42.951 } 00:05:42.951 ] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "vmd", 00:05:42.951 "config": [] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "accel", 00:05:42.951 "config": [ 00:05:42.951 { 00:05:42.951 "method": "accel_set_options", 00:05:42.951 "params": { 00:05:42.951 "small_cache_size": 128, 00:05:42.951 "large_cache_size": 16, 00:05:42.951 "task_count": 2048, 00:05:42.951 "sequence_count": 2048, 00:05:42.951 "buf_count": 2048 00:05:42.951 } 00:05:42.951 } 00:05:42.951 ] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "bdev", 00:05:42.951 "config": [ 00:05:42.951 { 00:05:42.951 "method": "bdev_set_options", 00:05:42.951 "params": { 00:05:42.951 "bdev_io_pool_size": 65535, 00:05:42.951 "bdev_io_cache_size": 256, 00:05:42.951 "bdev_auto_examine": true, 00:05:42.951 "iobuf_small_cache_size": 128, 00:05:42.951 "iobuf_large_cache_size": 16 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "bdev_raid_set_options", 00:05:42.951 "params": { 00:05:42.951 "process_window_size_kb": 1024 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "bdev_iscsi_set_options", 00:05:42.951 "params": { 00:05:42.951 "timeout_sec": 30 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "bdev_nvme_set_options", 00:05:42.951 "params": { 00:05:42.951 "action_on_timeout": "none", 00:05:42.951 "timeout_us": 0, 00:05:42.951 "timeout_admin_us": 0, 00:05:42.951 "keep_alive_timeout_ms": 10000, 00:05:42.951 "arbitration_burst": 0, 00:05:42.951 "low_priority_weight": 0, 00:05:42.951 "medium_priority_weight": 0, 00:05:42.951 "high_priority_weight": 0, 00:05:42.951 "nvme_adminq_poll_period_us": 10000, 00:05:42.951 "nvme_ioq_poll_period_us": 0, 00:05:42.951 "io_queue_requests": 0, 00:05:42.951 "delay_cmd_submit": true, 00:05:42.951 "transport_retry_count": 4, 00:05:42.951 "bdev_retry_count": 3, 00:05:42.951 "transport_ack_timeout": 0, 00:05:42.951 "ctrlr_loss_timeout_sec": 0, 00:05:42.951 "reconnect_delay_sec": 0, 00:05:42.951 "fast_io_fail_timeout_sec": 0, 00:05:42.951 "disable_auto_failback": false, 00:05:42.951 "generate_uuids": false, 00:05:42.951 "transport_tos": 0, 
00:05:42.951 "nvme_error_stat": false, 00:05:42.951 "rdma_srq_size": 0, 00:05:42.951 "io_path_stat": false, 00:05:42.951 "allow_accel_sequence": false, 00:05:42.951 "rdma_max_cq_size": 0, 00:05:42.951 "rdma_cm_event_timeout_ms": 0, 00:05:42.951 "dhchap_digests": [ 00:05:42.951 "sha256", 00:05:42.951 "sha384", 00:05:42.951 "sha512" 00:05:42.951 ], 00:05:42.951 "dhchap_dhgroups": [ 00:05:42.951 "null", 00:05:42.951 "ffdhe2048", 00:05:42.951 "ffdhe3072", 00:05:42.951 "ffdhe4096", 00:05:42.951 "ffdhe6144", 00:05:42.951 "ffdhe8192" 00:05:42.951 ] 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "bdev_nvme_set_hotplug", 00:05:42.951 "params": { 00:05:42.951 "period_us": 100000, 00:05:42.951 "enable": false 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "bdev_wait_for_examine" 00:05:42.951 } 00:05:42.951 ] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "scsi", 00:05:42.951 "config": null 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "scheduler", 00:05:42.951 "config": [ 00:05:42.951 { 00:05:42.951 "method": "framework_set_scheduler", 00:05:42.951 "params": { 00:05:42.951 "name": "static" 00:05:42.951 } 00:05:42.951 } 00:05:42.951 ] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "vhost_scsi", 00:05:42.951 "config": [] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "vhost_blk", 00:05:42.951 "config": [] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "ublk", 00:05:42.951 "config": [] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "nbd", 00:05:42.951 "config": [] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "nvmf", 00:05:42.951 "config": [ 00:05:42.951 { 00:05:42.951 "method": "nvmf_set_config", 00:05:42.951 "params": { 00:05:42.951 "discovery_filter": "match_any", 00:05:42.951 "admin_cmd_passthru": { 00:05:42.951 "identify_ctrlr": false 00:05:42.951 } 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "nvmf_set_max_subsystems", 00:05:42.951 "params": { 00:05:42.951 "max_subsystems": 1024 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "nvmf_set_crdt", 00:05:42.951 "params": { 00:05:42.951 "crdt1": 0, 00:05:42.951 "crdt2": 0, 00:05:42.951 "crdt3": 0 00:05:42.951 } 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "method": "nvmf_create_transport", 00:05:42.951 "params": { 00:05:42.951 "trtype": "TCP", 00:05:42.951 "max_queue_depth": 128, 00:05:42.951 "max_io_qpairs_per_ctrlr": 127, 00:05:42.951 "in_capsule_data_size": 4096, 00:05:42.951 "max_io_size": 131072, 00:05:42.951 "io_unit_size": 131072, 00:05:42.951 "max_aq_depth": 128, 00:05:42.951 "num_shared_buffers": 511, 00:05:42.951 "buf_cache_size": 4294967295, 00:05:42.951 "dif_insert_or_strip": false, 00:05:42.951 "zcopy": false, 00:05:42.951 "c2h_success": true, 00:05:42.951 "sock_priority": 0, 00:05:42.951 "abort_timeout_sec": 1, 00:05:42.951 "ack_timeout": 0, 00:05:42.951 "data_wr_pool_size": 0 00:05:42.951 } 00:05:42.951 } 00:05:42.951 ] 00:05:42.951 }, 00:05:42.951 { 00:05:42.951 "subsystem": "iscsi", 00:05:42.951 "config": [ 00:05:42.951 { 00:05:42.951 "method": "iscsi_set_options", 00:05:42.951 "params": { 00:05:42.951 "node_base": "iqn.2016-06.io.spdk", 00:05:42.951 "max_sessions": 128, 00:05:42.951 "max_connections_per_session": 2, 00:05:42.951 "max_queue_depth": 64, 00:05:42.951 "default_time2wait": 2, 00:05:42.951 "default_time2retain": 20, 00:05:42.951 "first_burst_length": 8192, 00:05:42.951 "immediate_data": true, 00:05:42.951 "allow_duplicated_isid": false, 00:05:42.951 
"error_recovery_level": 0, 00:05:42.951 "nop_timeout": 60, 00:05:42.951 "nop_in_interval": 30, 00:05:42.951 "disable_chap": false, 00:05:42.951 "require_chap": false, 00:05:42.951 "mutual_chap": false, 00:05:42.951 "chap_group": 0, 00:05:42.951 "max_large_datain_per_connection": 64, 00:05:42.951 "max_r2t_per_connection": 4, 00:05:42.951 "pdu_pool_size": 36864, 00:05:42.951 "immediate_data_pool_size": 16384, 00:05:42.951 "data_out_pool_size": 2048 00:05:42.951 } 00:05:42.951 } 00:05:42.951 ] 00:05:42.951 } 00:05:42.951 ] 00:05:42.951 } 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 772588 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 772588 ']' 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 772588 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 772588 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 772588' 00:05:42.951 killing process with pid 772588 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 772588 00:05:42.951 21:12:17 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 772588 00:05:43.519 21:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=772727 00:05:43.519 21:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:43.519 21:12:18 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 772727 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 772727 ']' 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 772727 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 772727 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 772727' 00:05:48.786 killing process with pid 772727 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 772727 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 772727 00:05:48.786 21:12:23 
skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:48.786 00:05:48.786 real 0m6.491s 00:05:48.786 user 0m6.081s 00:05:48.786 sys 0m0.720s 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 END TEST skip_rpc_with_json 00:05:48.786 ************************************ 00:05:48.786 21:12:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:48.786 21:12:23 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:48.786 21:12:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:48.786 21:12:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:48.786 21:12:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:48.786 ************************************ 00:05:48.786 START TEST skip_rpc_with_delay 00:05:48.786 ************************************ 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:48.786 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:05:49.048 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:49.048 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.048 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.048 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.048 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.048 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:49.049 [2024-07-11 21:12:23.609547] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:49.049 [2024-07-11 21:12:23.609662] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.049 00:05:49.049 real 0m0.068s 00:05:49.049 user 0m0.043s 00:05:49.049 sys 0m0.025s 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.049 21:12:23 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:49.049 ************************************ 00:05:49.049 END TEST skip_rpc_with_delay 00:05:49.049 ************************************ 00:05:49.049 21:12:23 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:49.049 21:12:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:49.049 21:12:23 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:49.049 21:12:23 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:49.049 21:12:23 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.049 21:12:23 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.049 21:12:23 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:49.049 ************************************ 00:05:49.049 START TEST exit_on_failed_rpc_init 00:05:49.049 ************************************ 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=773435 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 773435 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 773435 ']' 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.049 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.049 21:12:23 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:49.049 [2024-07-11 21:12:23.724502] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:49.049 [2024-07-11 21:12:23.724600] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773435 ] 00:05:49.049 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.049 [2024-07-11 21:12:23.783088] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.331 [2024-07-11 21:12:23.875618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:49.602 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:49.602 [2024-07-11 21:12:24.180627] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:49.602 [2024-07-11 21:12:24.180718] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773454 ] 00:05:49.602 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.602 [2024-07-11 21:12:24.242977] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.602 [2024-07-11 21:12:24.338126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.602 [2024-07-11 21:12:24.338237] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:49.602 [2024-07-11 21:12:24.338258] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:49.602 [2024-07-11 21:12:24.338272] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 773435 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 773435 ']' 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 773435 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 773435 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 773435' 00:05:49.862 killing process with pid 773435 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 773435 00:05:49.862 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 773435 00:05:50.125 00:05:50.125 real 0m1.197s 00:05:50.125 user 0m1.302s 00:05:50.125 sys 0m0.462s 00:05:50.125 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.125 21:12:24 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:50.125 ************************************ 00:05:50.125 END TEST exit_on_failed_rpc_init 00:05:50.125 ************************************ 00:05:50.125 21:12:24 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:50.125 21:12:24 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:50.125 00:05:50.125 real 0m13.442s 00:05:50.125 user 0m12.654s 00:05:50.125 sys 0m1.692s 00:05:50.125 21:12:24 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.125 21:12:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:50.125 ************************************ 00:05:50.125 END TEST skip_rpc 00:05:50.125 ************************************ 00:05:50.384 21:12:24 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.384 21:12:24 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:50.384 21:12:24 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.384 21:12:24 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.384 21:12:24 -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 ************************************ 00:05:50.384 START TEST rpc_client 00:05:50.384 ************************************ 00:05:50.384 21:12:24 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:50.384 * Looking for test storage... 00:05:50.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:50.384 21:12:24 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:50.384 OK 00:05:50.384 21:12:25 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:50.384 00:05:50.384 real 0m0.062s 00:05:50.384 user 0m0.026s 00:05:50.384 sys 0m0.041s 00:05:50.384 21:12:25 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:50.384 21:12:25 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 ************************************ 00:05:50.384 END TEST rpc_client 00:05:50.384 ************************************ 00:05:50.384 21:12:25 -- common/autotest_common.sh@1142 -- # return 0 00:05:50.384 21:12:25 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:50.384 21:12:25 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:50.384 21:12:25 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:50.384 21:12:25 -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 ************************************ 00:05:50.384 START TEST json_config 00:05:50.384 ************************************ 00:05:50.384 21:12:25 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:50.384 21:12:25 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:50.384 21:12:25 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:50.384 21:12:25 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:50.384 21:12:25 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:50.384 21:12:25 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.384 21:12:25 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.384 21:12:25 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.384 21:12:25 json_config -- paths/export.sh@5 -- # export PATH 00:05:50.384 21:12:25 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@47 -- # : 0 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:50.384 21:12:25 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:50.384 21:12:25 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:05:50.384 INFO: JSON configuration test init 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:05:50.384 21:12:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.384 21:12:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:05:50.384 21:12:25 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:50.384 21:12:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.384 21:12:25 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:05:50.384 21:12:25 json_config -- json_config/common.sh@9 -- # local app=target 00:05:50.384 21:12:25 json_config -- json_config/common.sh@10 -- # shift 00:05:50.384 21:12:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:50.384 21:12:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:50.384 21:12:25 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:05:50.384 21:12:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.384 21:12:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:50.384 21:12:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=773698 00:05:50.384 21:12:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:50.385 21:12:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:50.385 Waiting for target to run... 00:05:50.385 21:12:25 json_config -- json_config/common.sh@25 -- # waitforlisten 773698 /var/tmp/spdk_tgt.sock 00:05:50.385 21:12:25 json_config -- common/autotest_common.sh@829 -- # '[' -z 773698 ']' 00:05:50.385 21:12:25 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:50.385 21:12:25 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:50.385 21:12:25 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:50.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:50.385 21:12:25 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:50.385 21:12:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:50.385 [2024-07-11 21:12:25.150424] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:05:50.385 [2024-07-11 21:12:25.150507] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid773698 ] 00:05:50.643 EAL: No free 2048 kB hugepages reported on node 1 00:05:50.902 [2024-07-11 21:12:25.492171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.902 [2024-07-11 21:12:25.555692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:51.472 21:12:26 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:51.472 21:12:26 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:51.472 21:12:26 json_config -- json_config/common.sh@26 -- # echo '' 00:05:51.472 00:05:51.472 21:12:26 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:05:51.472 21:12:26 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:05:51.472 21:12:26 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:51.472 21:12:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.472 21:12:26 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:05:51.472 21:12:26 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:05:51.472 21:12:26 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:51.472 21:12:26 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:51.472 21:12:26 json_config -- json_config/json_config.sh@273 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:51.472 21:12:26 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:05:51.472 21:12:26 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:54.761 21:12:29 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:05:54.761 21:12:29 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:54.761 21:12:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:54.761 21:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:54.761 21:12:29 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:54.761 21:12:29 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:54.761 21:12:29 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:54.761 21:12:29 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:54.761 21:12:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:54.761 21:12:29 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:05:55.020 21:12:29 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:55.020 21:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@55 -- # return 0 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@278 -- # [[ 0 -eq 1 ]] 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@290 -- # [[ 1 -eq 1 ]] 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@291 -- # create_nvmf_subsystem_config 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@230 -- # timing_enter create_nvmf_subsystem_config 00:05:55.020 21:12:29 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:55.020 21:12:29 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@232 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@233 -- # [[ tcp == \r\d\m\a ]] 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@237 -- # [[ -z 127.0.0.1 ]] 00:05:55.020 21:12:29 json_config -- json_config/json_config.sh@242 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:55.020 21:12:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:55.278 MallocForNvmf0 00:05:55.278 21:12:29 json_config -- json_config/json_config.sh@243 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:55.278 21:12:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:55.535 MallocForNvmf1 
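
The two RAM-backed bdevs just registered are created through the same rpc.py wrapper the harness calls tgt_rpc. A minimal sketch of the equivalent manual invocations, assuming spdk_tgt is already listening on /var/tmp/spdk_tgt.sock in an SPDK checkout:

  # 8 MB bdev with 512-byte blocks and 4 MB bdev with 1024-byte blocks;
  # both serve as namespaces for the NVMe-oF subsystem configured next.
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
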
00:05:55.535 21:12:30 json_config -- json_config/json_config.sh@245 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:55.535 21:12:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:55.793 [2024-07-11 21:12:30.309236] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:55.793 21:12:30 json_config -- json_config/json_config.sh@246 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:55.793 21:12:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:56.051 21:12:30 json_config -- json_config/json_config.sh@247 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:56.051 21:12:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:56.051 21:12:30 json_config -- json_config/json_config.sh@248 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:56.051 21:12:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:56.310 21:12:31 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:56.310 21:12:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:56.567 [2024-07-11 21:12:31.284481] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:56.567 21:12:31 json_config -- json_config/json_config.sh@251 -- # timing_exit create_nvmf_subsystem_config 00:05:56.567 21:12:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.567 21:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.567 21:12:31 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:05:56.567 21:12:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.567 21:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:56.826 21:12:31 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:05:56.826 21:12:31 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:56.826 21:12:31 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:56.826 MallocBdevForConfigChangeCheck 00:05:56.826 21:12:31 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:05:56.826 21:12:31 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:56.826 21:12:31 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:57.084 21:12:31 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:05:57.084 21:12:31 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:57.342 21:12:31 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 00:05:57.342 INFO: shutting down applications... 00:05:57.342 21:12:31 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:05:57.342 21:12:31 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:05:57.342 21:12:31 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:05:57.342 21:12:31 json_config -- json_config/json_config.sh@333 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:59.248 Calling clear_iscsi_subsystem 00:05:59.248 Calling clear_nvmf_subsystem 00:05:59.248 Calling clear_nbd_subsystem 00:05:59.248 Calling clear_ublk_subsystem 00:05:59.248 Calling clear_vhost_blk_subsystem 00:05:59.248 Calling clear_vhost_scsi_subsystem 00:05:59.248 Calling clear_bdev_subsystem 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@337 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@343 -- # count=100 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@345 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@345 -- # break 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:05:59.248 21:12:33 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:05:59.248 21:12:33 json_config -- json_config/common.sh@31 -- # local app=target 00:05:59.248 21:12:33 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:59.248 21:12:33 json_config -- json_config/common.sh@35 -- # [[ -n 773698 ]] 00:05:59.248 21:12:34 json_config -- json_config/common.sh@38 -- # kill -SIGINT 773698 00:05:59.248 21:12:34 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:59.248 21:12:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.248 21:12:34 json_config -- json_config/common.sh@41 -- # kill -0 773698 00:05:59.248 21:12:34 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:59.815 21:12:34 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:59.815 21:12:34 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:59.815 21:12:34 json_config -- json_config/common.sh@41 -- # kill -0 773698 00:05:59.815 21:12:34 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:59.815 21:12:34 json_config -- json_config/common.sh@43 -- # break 00:05:59.815 21:12:34 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:59.815 21:12:34 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:59.815 SPDK target shutdown done 00:05:59.815 21:12:34 json_config -- 
json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 00:05:59.815 INFO: relaunching applications... 00:05:59.815 21:12:34 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.815 21:12:34 json_config -- json_config/common.sh@9 -- # local app=target 00:05:59.815 21:12:34 json_config -- json_config/common.sh@10 -- # shift 00:05:59.815 21:12:34 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:59.815 21:12:34 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:59.815 21:12:34 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:59.815 21:12:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.815 21:12:34 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:59.815 21:12:34 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=775005 00:05:59.815 21:12:34 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:59.815 21:12:34 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:59.815 Waiting for target to run... 00:05:59.815 21:12:34 json_config -- json_config/common.sh@25 -- # waitforlisten 775005 /var/tmp/spdk_tgt.sock 00:05:59.815 21:12:34 json_config -- common/autotest_common.sh@829 -- # '[' -z 775005 ']' 00:05:59.815 21:12:34 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:59.815 21:12:34 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:59.815 21:12:34 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:59.815 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:59.815 21:12:34 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:59.815 21:12:34 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:59.815 [2024-07-11 21:12:34.562513] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:05:59.815 [2024-07-11 21:12:34.562611] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775005 ] 00:06:00.073 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.333 [2024-07-11 21:12:35.093948] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.591 [2024-07-11 21:12:35.174908] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.885 [2024-07-11 21:12:38.206575] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:03.885 [2024-07-11 21:12:38.239029] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:06:04.452 21:12:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:04.452 21:12:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:06:04.452 21:12:38 json_config -- json_config/common.sh@26 -- # echo '' 00:06:04.452 00:06:04.452 21:12:38 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:06:04.453 21:12:38 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:06:04.453 INFO: Checking if target configuration is the same... 00:06:04.453 21:12:38 json_config -- json_config/json_config.sh@378 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.453 21:12:38 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:06:04.453 21:12:38 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.453 + '[' 2 -ne 2 ']' 00:06:04.453 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:04.453 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:04.453 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:04.453 +++ basename /dev/fd/62 00:06:04.453 ++ mktemp /tmp/62.XXX 00:06:04.453 + tmp_file_1=/tmp/62.zUw 00:06:04.453 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.453 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:04.453 + tmp_file_2=/tmp/spdk_tgt_config.json.sUH 00:06:04.453 + ret=0 00:06:04.453 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.711 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:04.711 + diff -u /tmp/62.zUw /tmp/spdk_tgt_config.json.sUH 00:06:04.711 + echo 'INFO: JSON config files are the same' 00:06:04.711 INFO: JSON config files are the same 00:06:04.711 + rm /tmp/62.zUw /tmp/spdk_tgt_config.json.sUH 00:06:04.711 + exit 0 00:06:04.711 21:12:39 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:06:04.711 21:12:39 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:06:04.711 INFO: changing configuration and checking if this can be detected... 
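
The "JSON config files are the same" verdict above reduces to dumping the live configuration with save_config, normalizing key order on both sides with config_filter.py, and diffing the results. A sketch of that flow (the temp-file names here are illustrative; the harness picks them with mktemp):

  # Dump the running target's configuration, sort both JSON documents,
  # and compare; an empty diff means the restored config matches the file.
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
  test/json_config/config_filter.py -method sort \
      < spdk_tgt_config.json > /tmp/file.sorted.json
  diff -u /tmp/file.sorted.json /tmp/live.sorted.json \
      && echo 'INFO: JSON config files are the same'
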
00:06:04.711 21:12:39 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:04.711 21:12:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:06:04.968 21:12:39 json_config -- json_config/json_config.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.969 21:12:39 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:06:04.969 21:12:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:06:04.969 + '[' 2 -ne 2 ']' 00:06:04.969 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:06:04.969 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:06:04.969 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:06:04.969 +++ basename /dev/fd/62 00:06:04.969 ++ mktemp /tmp/62.XXX 00:06:04.969 + tmp_file_1=/tmp/62.v3C 00:06:04.969 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:04.969 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:06:04.969 + tmp_file_2=/tmp/spdk_tgt_config.json.hLa 00:06:04.969 + ret=0 00:06:04.969 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.536 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:06:05.536 + diff -u /tmp/62.v3C /tmp/spdk_tgt_config.json.hLa 00:06:05.536 + ret=1 00:06:05.536 + echo '=== Start of file: /tmp/62.v3C ===' 00:06:05.536 + cat /tmp/62.v3C 00:06:05.536 + echo '=== End of file: /tmp/62.v3C ===' 00:06:05.536 + echo '' 00:06:05.536 + echo '=== Start of file: /tmp/spdk_tgt_config.json.hLa ===' 00:06:05.536 + cat /tmp/spdk_tgt_config.json.hLa 00:06:05.536 + echo '=== End of file: /tmp/spdk_tgt_config.json.hLa ===' 00:06:05.536 + echo '' 00:06:05.536 + rm /tmp/62.v3C /tmp/spdk_tgt_config.json.hLa 00:06:05.536 + exit 1 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:06:05.536 INFO: configuration change detected. 
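
Change detection is the same comparison run in reverse: mutate the live target so it no longer matches the saved JSON and expect a non-empty diff. Sketched with the sentinel bdev used above (reusing the sorted file from the previous sketch):

  # MallocBdevForConfigChangeCheck exists in spdk_tgt_config.json; deleting
  # it from the running target guarantees the live config now differs.
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck
  scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
      | test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
  diff -u /tmp/file.sorted.json /tmp/live.sorted.json \
      || echo 'INFO: configuration change detected.'
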
00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@317 -- # [[ -n 775005 ]] 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@186 -- # [[ 0 -eq 1 ]] 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@193 -- # uname -s 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@193 -- # [[ Linux = Linux ]] 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@194 -- # rm -f /sample_aio 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:05.536 21:12:40 json_config -- json_config/json_config.sh@323 -- # killprocess 775005 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@948 -- # '[' -z 775005 ']' 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@952 -- # kill -0 775005 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@953 -- # uname 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 775005 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 775005' 00:06:05.536 killing process with pid 775005 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@967 -- # kill 775005 00:06:05.536 21:12:40 json_config -- common/autotest_common.sh@972 -- # wait 775005 00:06:07.440 21:12:41 json_config -- json_config/json_config.sh@326 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:06:07.440 21:12:41 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:06:07.440 21:12:41 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:07.440 21:12:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.440 21:12:41 json_config -- json_config/json_config.sh@328 -- # return 0 00:06:07.440 21:12:41 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:06:07.440 INFO: Success 00:06:07.440 00:06:07.440 real 0m16.744s 00:06:07.440 user 
0m18.722s 00:06:07.440 sys 0m2.063s 00:06:07.440 21:12:41 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.440 21:12:41 json_config -- common/autotest_common.sh@10 -- # set +x 00:06:07.440 ************************************ 00:06:07.440 END TEST json_config 00:06:07.440 ************************************ 00:06:07.440 21:12:41 -- common/autotest_common.sh@1142 -- # return 0 00:06:07.441 21:12:41 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:07.441 21:12:41 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:07.441 21:12:41 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.441 21:12:41 -- common/autotest_common.sh@10 -- # set +x 00:06:07.441 ************************************ 00:06:07.441 START TEST json_config_extra_key 00:06:07.441 ************************************ 00:06:07.441 21:12:41 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:07.441 21:12:41 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:07.441 21:12:41 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:07.441 21:12:41 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:07.441 21:12:41 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.441 21:12:41 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.441 21:12:41 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.441 21:12:41 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:06:07.441 21:12:41 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:07.441 21:12:41 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:06:07.441 21:12:41 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:06:07.441 INFO: launching applications... 00:06:07.441 21:12:41 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=775921 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:06:07.441 Waiting for target to run... 00:06:07.441 21:12:41 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 775921 /var/tmp/spdk_tgt.sock 00:06:07.441 21:12:41 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 775921 ']' 00:06:07.441 21:12:41 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:06:07.441 21:12:41 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:07.441 21:12:41 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:06:07.441 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:06:07.441 21:12:41 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:07.441 21:12:41 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:07.441 [2024-07-11 21:12:41.939138] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:07.441 [2024-07-11 21:12:41.939229] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid775921 ] 00:06:07.441 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.701 [2024-07-11 21:12:42.440750] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.959 [2024-07-11 21:12:42.522829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.217 21:12:42 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:08.217 21:12:42 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:06:08.217 00:06:08.217 21:12:42 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:06:08.217 INFO: shutting down applications... 00:06:08.217 21:12:42 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 775921 ]] 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 775921 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 775921 00:06:08.217 21:12:42 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:06:08.782 21:12:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:06:08.782 21:12:43 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:06:08.782 21:12:43 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 775921 00:06:08.782 21:12:43 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:06:08.782 21:12:43 json_config_extra_key -- json_config/common.sh@43 -- # break 00:06:08.782 21:12:43 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:06:08.782 21:12:43 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:06:08.782 SPDK target shutdown done 00:06:08.782 21:12:43 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:06:08.782 Success 00:06:08.782 00:06:08.782 real 0m1.552s 00:06:08.782 user 0m1.358s 00:06:08.782 sys 0m0.572s 00:06:08.782 21:12:43 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.782 21:12:43 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:06:08.782 ************************************ 00:06:08.782 END TEST json_config_extra_key 00:06:08.782 ************************************ 00:06:08.782 21:12:43 -- common/autotest_common.sh@1142 -- # return 0 00:06:08.782 21:12:43 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:08.782 21:12:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:08.782 21:12:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.782 21:12:43 -- 
common/autotest_common.sh@10 -- # set +x 00:06:08.782 ************************************ 00:06:08.782 START TEST alias_rpc 00:06:08.782 ************************************ 00:06:08.782 21:12:43 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:06:08.782 * Looking for test storage... 00:06:08.782 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:06:08.782 21:12:43 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:08.782 21:12:43 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=776236 00:06:08.782 21:12:43 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:08.782 21:12:43 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 776236 00:06:08.782 21:12:43 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 776236 ']' 00:06:08.783 21:12:43 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:08.783 21:12:43 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:08.783 21:12:43 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:08.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:08.783 21:12:43 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:08.783 21:12:43 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:08.783 [2024-07-11 21:12:43.535943] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:08.783 [2024-07-11 21:12:43.536038] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776236 ] 00:06:09.041 EAL: No free 2048 kB hugepages reported on node 1 00:06:09.041 [2024-07-11 21:12:43.594570] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:09.041 [2024-07-11 21:12:43.678742] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:09.307 21:12:43 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:09.307 21:12:43 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:09.307 21:12:43 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:06:09.596 21:12:44 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 776236 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 776236 ']' 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 776236 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 776236 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 776236' 00:06:09.596 killing process with pid 776236 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@967 
-- # kill 776236 00:06:09.596 21:12:44 alias_rpc -- common/autotest_common.sh@972 -- # wait 776236 00:06:09.854 00:06:09.854 real 0m1.186s 00:06:09.854 user 0m1.291s 00:06:09.854 sys 0m0.413s 00:06:09.854 21:12:44 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:09.854 21:12:44 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:09.854 ************************************ 00:06:09.854 END TEST alias_rpc 00:06:09.854 ************************************ 00:06:10.112 21:12:44 -- common/autotest_common.sh@1142 -- # return 0 00:06:10.112 21:12:44 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:06:10.112 21:12:44 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:10.112 21:12:44 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:10.112 21:12:44 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.112 21:12:44 -- common/autotest_common.sh@10 -- # set +x 00:06:10.112 ************************************ 00:06:10.112 START TEST spdkcli_tcp 00:06:10.112 ************************************ 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:06:10.112 * Looking for test storage... 00:06:10.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=776423 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:06:10.112 21:12:44 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 776423 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 776423 ']' 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:10.112 21:12:44 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.112 [2024-07-11 21:12:44.774084] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:10.112 [2024-07-11 21:12:44.774170] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776423 ] 00:06:10.112 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.112 [2024-07-11 21:12:44.831520] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.372 [2024-07-11 21:12:44.916891] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.372 [2024-07-11 21:12:44.916895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.630 21:12:45 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:10.630 21:12:45 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:06:10.630 21:12:45 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=776427 00:06:10.630 21:12:45 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:06:10.630 21:12:45 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:06:10.888 [ 00:06:10.888 "bdev_malloc_delete", 00:06:10.888 "bdev_malloc_create", 00:06:10.888 "bdev_null_resize", 00:06:10.888 "bdev_null_delete", 00:06:10.889 "bdev_null_create", 00:06:10.889 "bdev_nvme_cuse_unregister", 00:06:10.889 "bdev_nvme_cuse_register", 00:06:10.889 "bdev_opal_new_user", 00:06:10.889 "bdev_opal_set_lock_state", 00:06:10.889 "bdev_opal_delete", 00:06:10.889 "bdev_opal_get_info", 00:06:10.889 "bdev_opal_create", 00:06:10.889 "bdev_nvme_opal_revert", 00:06:10.889 "bdev_nvme_opal_init", 00:06:10.889 "bdev_nvme_send_cmd", 00:06:10.889 "bdev_nvme_get_path_iostat", 00:06:10.889 "bdev_nvme_get_mdns_discovery_info", 00:06:10.889 "bdev_nvme_stop_mdns_discovery", 00:06:10.889 "bdev_nvme_start_mdns_discovery", 00:06:10.889 "bdev_nvme_set_multipath_policy", 00:06:10.889 "bdev_nvme_set_preferred_path", 00:06:10.889 "bdev_nvme_get_io_paths", 00:06:10.889 "bdev_nvme_remove_error_injection", 00:06:10.889 "bdev_nvme_add_error_injection", 00:06:10.889 "bdev_nvme_get_discovery_info", 00:06:10.889 "bdev_nvme_stop_discovery", 00:06:10.889 "bdev_nvme_start_discovery", 00:06:10.889 "bdev_nvme_get_controller_health_info", 00:06:10.889 "bdev_nvme_disable_controller", 00:06:10.889 "bdev_nvme_enable_controller", 00:06:10.889 "bdev_nvme_reset_controller", 00:06:10.889 "bdev_nvme_get_transport_statistics", 00:06:10.889 "bdev_nvme_apply_firmware", 00:06:10.889 "bdev_nvme_detach_controller", 00:06:10.889 "bdev_nvme_get_controllers", 00:06:10.889 "bdev_nvme_attach_controller", 00:06:10.889 "bdev_nvme_set_hotplug", 00:06:10.889 "bdev_nvme_set_options", 00:06:10.889 "bdev_passthru_delete", 00:06:10.889 "bdev_passthru_create", 00:06:10.889 "bdev_lvol_set_parent_bdev", 00:06:10.889 "bdev_lvol_set_parent", 00:06:10.889 "bdev_lvol_check_shallow_copy", 00:06:10.889 "bdev_lvol_start_shallow_copy", 00:06:10.889 "bdev_lvol_grow_lvstore", 00:06:10.889 "bdev_lvol_get_lvols", 00:06:10.889 "bdev_lvol_get_lvstores", 00:06:10.889 "bdev_lvol_delete", 00:06:10.889 "bdev_lvol_set_read_only", 00:06:10.889 "bdev_lvol_resize", 00:06:10.889 "bdev_lvol_decouple_parent", 00:06:10.889 "bdev_lvol_inflate", 00:06:10.889 "bdev_lvol_rename", 00:06:10.889 "bdev_lvol_clone_bdev", 00:06:10.889 "bdev_lvol_clone", 00:06:10.889 "bdev_lvol_snapshot", 00:06:10.889 "bdev_lvol_create", 00:06:10.889 "bdev_lvol_delete_lvstore", 00:06:10.889 
"bdev_lvol_rename_lvstore", 00:06:10.889 "bdev_lvol_create_lvstore", 00:06:10.889 "bdev_raid_set_options", 00:06:10.889 "bdev_raid_remove_base_bdev", 00:06:10.889 "bdev_raid_add_base_bdev", 00:06:10.889 "bdev_raid_delete", 00:06:10.889 "bdev_raid_create", 00:06:10.889 "bdev_raid_get_bdevs", 00:06:10.889 "bdev_error_inject_error", 00:06:10.889 "bdev_error_delete", 00:06:10.889 "bdev_error_create", 00:06:10.889 "bdev_split_delete", 00:06:10.889 "bdev_split_create", 00:06:10.889 "bdev_delay_delete", 00:06:10.889 "bdev_delay_create", 00:06:10.889 "bdev_delay_update_latency", 00:06:10.889 "bdev_zone_block_delete", 00:06:10.889 "bdev_zone_block_create", 00:06:10.889 "blobfs_create", 00:06:10.889 "blobfs_detect", 00:06:10.889 "blobfs_set_cache_size", 00:06:10.889 "bdev_aio_delete", 00:06:10.889 "bdev_aio_rescan", 00:06:10.889 "bdev_aio_create", 00:06:10.889 "bdev_ftl_set_property", 00:06:10.889 "bdev_ftl_get_properties", 00:06:10.889 "bdev_ftl_get_stats", 00:06:10.889 "bdev_ftl_unmap", 00:06:10.889 "bdev_ftl_unload", 00:06:10.889 "bdev_ftl_delete", 00:06:10.889 "bdev_ftl_load", 00:06:10.889 "bdev_ftl_create", 00:06:10.889 "bdev_virtio_attach_controller", 00:06:10.889 "bdev_virtio_scsi_get_devices", 00:06:10.889 "bdev_virtio_detach_controller", 00:06:10.889 "bdev_virtio_blk_set_hotplug", 00:06:10.889 "bdev_iscsi_delete", 00:06:10.889 "bdev_iscsi_create", 00:06:10.889 "bdev_iscsi_set_options", 00:06:10.889 "accel_error_inject_error", 00:06:10.889 "ioat_scan_accel_module", 00:06:10.889 "dsa_scan_accel_module", 00:06:10.889 "iaa_scan_accel_module", 00:06:10.889 "vfu_virtio_create_scsi_endpoint", 00:06:10.889 "vfu_virtio_scsi_remove_target", 00:06:10.889 "vfu_virtio_scsi_add_target", 00:06:10.889 "vfu_virtio_create_blk_endpoint", 00:06:10.889 "vfu_virtio_delete_endpoint", 00:06:10.889 "keyring_file_remove_key", 00:06:10.889 "keyring_file_add_key", 00:06:10.889 "keyring_linux_set_options", 00:06:10.889 "iscsi_get_histogram", 00:06:10.889 "iscsi_enable_histogram", 00:06:10.889 "iscsi_set_options", 00:06:10.889 "iscsi_get_auth_groups", 00:06:10.889 "iscsi_auth_group_remove_secret", 00:06:10.889 "iscsi_auth_group_add_secret", 00:06:10.889 "iscsi_delete_auth_group", 00:06:10.889 "iscsi_create_auth_group", 00:06:10.889 "iscsi_set_discovery_auth", 00:06:10.889 "iscsi_get_options", 00:06:10.889 "iscsi_target_node_request_logout", 00:06:10.889 "iscsi_target_node_set_redirect", 00:06:10.889 "iscsi_target_node_set_auth", 00:06:10.889 "iscsi_target_node_add_lun", 00:06:10.889 "iscsi_get_stats", 00:06:10.889 "iscsi_get_connections", 00:06:10.889 "iscsi_portal_group_set_auth", 00:06:10.889 "iscsi_start_portal_group", 00:06:10.889 "iscsi_delete_portal_group", 00:06:10.889 "iscsi_create_portal_group", 00:06:10.889 "iscsi_get_portal_groups", 00:06:10.889 "iscsi_delete_target_node", 00:06:10.889 "iscsi_target_node_remove_pg_ig_maps", 00:06:10.889 "iscsi_target_node_add_pg_ig_maps", 00:06:10.889 "iscsi_create_target_node", 00:06:10.889 "iscsi_get_target_nodes", 00:06:10.889 "iscsi_delete_initiator_group", 00:06:10.889 "iscsi_initiator_group_remove_initiators", 00:06:10.889 "iscsi_initiator_group_add_initiators", 00:06:10.889 "iscsi_create_initiator_group", 00:06:10.889 "iscsi_get_initiator_groups", 00:06:10.889 "nvmf_set_crdt", 00:06:10.889 "nvmf_set_config", 00:06:10.889 "nvmf_set_max_subsystems", 00:06:10.889 "nvmf_stop_mdns_prr", 00:06:10.889 "nvmf_publish_mdns_prr", 00:06:10.889 "nvmf_subsystem_get_listeners", 00:06:10.889 "nvmf_subsystem_get_qpairs", 00:06:10.889 "nvmf_subsystem_get_controllers", 00:06:10.889 
"nvmf_get_stats", 00:06:10.889 "nvmf_get_transports", 00:06:10.889 "nvmf_create_transport", 00:06:10.889 "nvmf_get_targets", 00:06:10.889 "nvmf_delete_target", 00:06:10.889 "nvmf_create_target", 00:06:10.889 "nvmf_subsystem_allow_any_host", 00:06:10.889 "nvmf_subsystem_remove_host", 00:06:10.889 "nvmf_subsystem_add_host", 00:06:10.889 "nvmf_ns_remove_host", 00:06:10.889 "nvmf_ns_add_host", 00:06:10.889 "nvmf_subsystem_remove_ns", 00:06:10.889 "nvmf_subsystem_add_ns", 00:06:10.889 "nvmf_subsystem_listener_set_ana_state", 00:06:10.889 "nvmf_discovery_get_referrals", 00:06:10.889 "nvmf_discovery_remove_referral", 00:06:10.889 "nvmf_discovery_add_referral", 00:06:10.889 "nvmf_subsystem_remove_listener", 00:06:10.889 "nvmf_subsystem_add_listener", 00:06:10.889 "nvmf_delete_subsystem", 00:06:10.889 "nvmf_create_subsystem", 00:06:10.889 "nvmf_get_subsystems", 00:06:10.889 "env_dpdk_get_mem_stats", 00:06:10.889 "nbd_get_disks", 00:06:10.889 "nbd_stop_disk", 00:06:10.889 "nbd_start_disk", 00:06:10.889 "ublk_recover_disk", 00:06:10.889 "ublk_get_disks", 00:06:10.889 "ublk_stop_disk", 00:06:10.889 "ublk_start_disk", 00:06:10.889 "ublk_destroy_target", 00:06:10.889 "ublk_create_target", 00:06:10.889 "virtio_blk_create_transport", 00:06:10.889 "virtio_blk_get_transports", 00:06:10.889 "vhost_controller_set_coalescing", 00:06:10.889 "vhost_get_controllers", 00:06:10.889 "vhost_delete_controller", 00:06:10.889 "vhost_create_blk_controller", 00:06:10.889 "vhost_scsi_controller_remove_target", 00:06:10.889 "vhost_scsi_controller_add_target", 00:06:10.889 "vhost_start_scsi_controller", 00:06:10.889 "vhost_create_scsi_controller", 00:06:10.889 "thread_set_cpumask", 00:06:10.889 "framework_get_governor", 00:06:10.889 "framework_get_scheduler", 00:06:10.889 "framework_set_scheduler", 00:06:10.889 "framework_get_reactors", 00:06:10.889 "thread_get_io_channels", 00:06:10.889 "thread_get_pollers", 00:06:10.889 "thread_get_stats", 00:06:10.889 "framework_monitor_context_switch", 00:06:10.889 "spdk_kill_instance", 00:06:10.889 "log_enable_timestamps", 00:06:10.889 "log_get_flags", 00:06:10.889 "log_clear_flag", 00:06:10.889 "log_set_flag", 00:06:10.889 "log_get_level", 00:06:10.889 "log_set_level", 00:06:10.889 "log_get_print_level", 00:06:10.889 "log_set_print_level", 00:06:10.889 "framework_enable_cpumask_locks", 00:06:10.889 "framework_disable_cpumask_locks", 00:06:10.889 "framework_wait_init", 00:06:10.889 "framework_start_init", 00:06:10.889 "scsi_get_devices", 00:06:10.889 "bdev_get_histogram", 00:06:10.889 "bdev_enable_histogram", 00:06:10.889 "bdev_set_qos_limit", 00:06:10.889 "bdev_set_qd_sampling_period", 00:06:10.889 "bdev_get_bdevs", 00:06:10.889 "bdev_reset_iostat", 00:06:10.889 "bdev_get_iostat", 00:06:10.889 "bdev_examine", 00:06:10.889 "bdev_wait_for_examine", 00:06:10.889 "bdev_set_options", 00:06:10.889 "notify_get_notifications", 00:06:10.889 "notify_get_types", 00:06:10.889 "accel_get_stats", 00:06:10.889 "accel_set_options", 00:06:10.889 "accel_set_driver", 00:06:10.889 "accel_crypto_key_destroy", 00:06:10.889 "accel_crypto_keys_get", 00:06:10.889 "accel_crypto_key_create", 00:06:10.889 "accel_assign_opc", 00:06:10.889 "accel_get_module_info", 00:06:10.889 "accel_get_opc_assignments", 00:06:10.889 "vmd_rescan", 00:06:10.889 "vmd_remove_device", 00:06:10.889 "vmd_enable", 00:06:10.889 "sock_get_default_impl", 00:06:10.889 "sock_set_default_impl", 00:06:10.889 "sock_impl_set_options", 00:06:10.889 "sock_impl_get_options", 00:06:10.889 "iobuf_get_stats", 00:06:10.889 "iobuf_set_options", 
00:06:10.889 "keyring_get_keys", 00:06:10.889 "framework_get_pci_devices", 00:06:10.889 "framework_get_config", 00:06:10.889 "framework_get_subsystems", 00:06:10.889 "vfu_tgt_set_base_path", 00:06:10.889 "trace_get_info", 00:06:10.889 "trace_get_tpoint_group_mask", 00:06:10.889 "trace_disable_tpoint_group", 00:06:10.889 "trace_enable_tpoint_group", 00:06:10.889 "trace_clear_tpoint_mask", 00:06:10.889 "trace_set_tpoint_mask", 00:06:10.889 "spdk_get_version", 00:06:10.889 "rpc_get_methods" 00:06:10.889 ] 00:06:10.889 21:12:45 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:10.890 21:12:45 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:06:10.890 21:12:45 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 776423 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 776423 ']' 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 776423 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 776423 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 776423' 00:06:10.890 killing process with pid 776423 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 776423 00:06:10.890 21:12:45 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 776423 00:06:11.148 00:06:11.148 real 0m1.196s 00:06:11.148 user 0m2.093s 00:06:11.148 sys 0m0.457s 00:06:11.148 21:12:45 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.148 21:12:45 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:11.148 ************************************ 00:06:11.148 END TEST spdkcli_tcp 00:06:11.148 ************************************ 00:06:11.148 21:12:45 -- common/autotest_common.sh@1142 -- # return 0 00:06:11.148 21:12:45 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.148 21:12:45 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:11.148 21:12:45 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.148 21:12:45 -- common/autotest_common.sh@10 -- # set +x 00:06:11.148 ************************************ 00:06:11.148 START TEST dpdk_mem_utility 00:06:11.148 ************************************ 00:06:11.148 21:12:45 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:06:11.406 * Looking for test storage... 
00:06:11.406 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:06:11.406 21:12:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:11.406 21:12:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=776625 00:06:11.406 21:12:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:06:11.406 21:12:45 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 776625 00:06:11.406 21:12:45 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 776625 ']' 00:06:11.406 21:12:45 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.406 21:12:45 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:11.406 21:12:45 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.406 21:12:45 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:11.406 21:12:45 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.406 [2024-07-11 21:12:46.015319] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:11.406 [2024-07-11 21:12:46.015413] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776625 ] 00:06:11.406 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.406 [2024-07-11 21:12:46.072987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.406 [2024-07-11 21:12:46.156620] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.666 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:11.666 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:06:11.666 21:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:06:11.666 21:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:06:11.666 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:11.666 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:11.666 { 00:06:11.666 "filename": "/tmp/spdk_mem_dump.txt" 00:06:11.666 } 00:06:11.666 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:11.666 21:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:06:11.927 DPDK memory size 814.000000 MiB in 1 heap(s) 00:06:11.927 1 heaps totaling size 814.000000 MiB 00:06:11.927 size: 814.000000 MiB heap id: 0 00:06:11.927 end heaps---------- 00:06:11.927 8 mempools totaling size 598.116089 MiB 00:06:11.927 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:06:11.927 size: 158.602051 MiB name: PDU_data_out_Pool 00:06:11.927 size: 84.521057 MiB name: bdev_io_776625 00:06:11.927 size: 51.011292 MiB name: evtpool_776625 00:06:11.927 size: 
50.003479 MiB name: msgpool_776625 00:06:11.927 size: 21.763794 MiB name: PDU_Pool 00:06:11.927 size: 19.513306 MiB name: SCSI_TASK_Pool 00:06:11.927 size: 0.026123 MiB name: Session_Pool 00:06:11.927 end mempools------- 00:06:11.927 6 memzones totaling size 4.142822 MiB 00:06:11.927 size: 1.000366 MiB name: RG_ring_0_776625 00:06:11.927 size: 1.000366 MiB name: RG_ring_1_776625 00:06:11.927 size: 1.000366 MiB name: RG_ring_4_776625 00:06:11.927 size: 1.000366 MiB name: RG_ring_5_776625 00:06:11.927 size: 0.125366 MiB name: RG_ring_2_776625 00:06:11.927 size: 0.015991 MiB name: RG_ring_3_776625 00:06:11.927 end memzones------- 00:06:11.927 21:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:06:11.927 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:06:11.927 list of free elements. size: 12.519348 MiB 00:06:11.927 element at address: 0x200000400000 with size: 1.999512 MiB 00:06:11.927 element at address: 0x200018e00000 with size: 0.999878 MiB 00:06:11.927 element at address: 0x200019000000 with size: 0.999878 MiB 00:06:11.927 element at address: 0x200003e00000 with size: 0.996277 MiB 00:06:11.927 element at address: 0x200031c00000 with size: 0.994446 MiB 00:06:11.927 element at address: 0x200013800000 with size: 0.978699 MiB 00:06:11.927 element at address: 0x200007000000 with size: 0.959839 MiB 00:06:11.927 element at address: 0x200019200000 with size: 0.936584 MiB 00:06:11.927 element at address: 0x200000200000 with size: 0.841614 MiB 00:06:11.927 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:06:11.927 element at address: 0x20000b200000 with size: 0.490723 MiB 00:06:11.927 element at address: 0x200000800000 with size: 0.487793 MiB 00:06:11.927 element at address: 0x200019400000 with size: 0.485657 MiB 00:06:11.927 element at address: 0x200027e00000 with size: 0.410034 MiB 00:06:11.927 element at address: 0x200003a00000 with size: 0.355530 MiB 00:06:11.927 list of standard malloc elements. 
size: 199.218079 MiB 00:06:11.927 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:06:11.927 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:06:11.927 element at address: 0x200018efff80 with size: 1.000122 MiB 00:06:11.927 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:06:11.927 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:06:11.927 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:06:11.927 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:06:11.927 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:06:11.927 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:06:11.927 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:06:11.927 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:06:11.927 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200003adb300 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200003adb500 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200003affa80 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200003affb40 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:06:11.927 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:06:11.927 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:06:11.927 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:06:11.927 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:06:11.927 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:06:11.927 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200027e69040 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:06:11.927 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:06:11.927 list of memzone associated elements. 
size: 602.262573 MiB 00:06:11.927 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:06:11.927 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:06:11.927 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:06:11.927 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:06:11.927 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:06:11.927 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_776625_0 00:06:11.927 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:06:11.927 associated memzone info: size: 48.002930 MiB name: MP_evtpool_776625_0 00:06:11.927 element at address: 0x200003fff380 with size: 48.003052 MiB 00:06:11.927 associated memzone info: size: 48.002930 MiB name: MP_msgpool_776625_0 00:06:11.927 element at address: 0x2000195be940 with size: 20.255554 MiB 00:06:11.927 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:06:11.927 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:06:11.927 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:06:11.927 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:06:11.927 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_776625 00:06:11.927 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:06:11.927 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_776625 00:06:11.927 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:06:11.927 associated memzone info: size: 1.007996 MiB name: MP_evtpool_776625 00:06:11.927 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:06:11.927 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:06:11.927 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:06:11.927 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:06:11.927 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:06:11.927 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:06:11.927 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:06:11.927 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:06:11.927 element at address: 0x200003eff180 with size: 1.000488 MiB 00:06:11.927 associated memzone info: size: 1.000366 MiB name: RG_ring_0_776625 00:06:11.927 element at address: 0x200003affc00 with size: 1.000488 MiB 00:06:11.927 associated memzone info: size: 1.000366 MiB name: RG_ring_1_776625 00:06:11.927 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:06:11.927 associated memzone info: size: 1.000366 MiB name: RG_ring_4_776625 00:06:11.927 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:06:11.927 associated memzone info: size: 1.000366 MiB name: RG_ring_5_776625 00:06:11.927 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:06:11.927 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_776625 00:06:11.927 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:06:11.927 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:06:11.927 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:06:11.927 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:06:11.927 element at address: 0x20001947c540 with size: 0.250488 MiB 00:06:11.927 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:06:11.927 element at address: 0x200003adf880 with size: 0.125488 MiB 00:06:11.927 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_776625 00:06:11.927 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:06:11.927 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:06:11.927 element at address: 0x200027e69100 with size: 0.023743 MiB 00:06:11.927 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:06:11.927 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:06:11.927 associated memzone info: size: 0.015991 MiB name: RG_ring_3_776625 00:06:11.927 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:06:11.927 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:06:11.927 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:06:11.927 associated memzone info: size: 0.000183 MiB name: MP_msgpool_776625 00:06:11.927 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:06:11.927 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_776625 00:06:11.927 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:06:11.927 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:06:11.927 21:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:06:11.927 21:12:46 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 776625 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 776625 ']' 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 776625 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 776625 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 776625' 00:06:11.928 killing process with pid 776625 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 776625 00:06:11.928 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 776625 00:06:12.187 00:06:12.187 real 0m1.021s 00:06:12.187 user 0m0.995s 00:06:12.187 sys 0m0.399s 00:06:12.187 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.187 21:12:46 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:06:12.187 ************************************ 00:06:12.187 END TEST dpdk_mem_utility 00:06:12.187 ************************************ 00:06:12.445 21:12:46 -- common/autotest_common.sh@1142 -- # return 0 00:06:12.445 21:12:46 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:12.445 21:12:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:12.445 21:12:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.445 21:12:46 -- common/autotest_common.sh@10 -- # set +x 00:06:12.445 ************************************ 00:06:12.445 START TEST event 00:06:12.445 ************************************ 00:06:12.445 21:12:46 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:06:12.445 * Looking for test storage... 
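The heap, mempool, and memzone dump above is the substance of the dpdk_mem_utility test: the env_dpdk_get_mem_stats RPC makes the target write its DPDK allocator state to /tmp/spdk_mem_dump.txt, scripts/dpdk_mem_info.py renders the totals, and -m 0 expands heap 0 element by element. A sketch of the same sequence against any running SPDK target (paths relative to the repository root):

  # 1. Dump the target's DPDK memory state; the reply names /tmp/spdk_mem_dump.txt.
  ./scripts/rpc.py env_dpdk_get_mem_stats
  # 2. Summarize heaps, mempools, and memzones from that dump.
  ./scripts/dpdk_mem_info.py
  # 3. Per-element view of heap 0, matching the listing above.
  ./scripts/dpdk_mem_info.py -m 0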
00:06:12.445 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:06:12.445 21:12:47 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh
00:06:12.445 21:12:47 event -- bdev/nbd_common.sh@6 -- # set -e
00:06:12.445 21:12:47 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:12.445 21:12:47 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:06:12.445 21:12:47 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:12.445 21:12:47 event -- common/autotest_common.sh@10 -- # set +x
00:06:12.445 ************************************
00:06:12.445 START TEST event_perf
00:06:12.445 ************************************
00:06:12.445 21:12:47 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1
00:06:12.445 Running I/O for 1 seconds...[2024-07-11 21:12:47.082511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:12.445 [2024-07-11 21:12:47.082575] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776813 ]
00:06:12.445 EAL: No free 2048 kB hugepages reported on node 1
00:06:12.445 [2024-07-11 21:12:47.144769] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:12.704 [2024-07-11 21:12:47.237895] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:12.704 [2024-07-11 21:12:47.237947] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:12.704 [2024-07-11 21:12:47.238065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:12.704 [2024-07-11 21:12:47.238068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.640 Running I/O for 1 seconds...
00:06:13.640 lcore 0: 240312
00:06:13.640 lcore 1: 240311
00:06:13.640 lcore 2: 240312
00:06:13.640 lcore 3: 240311
00:06:13.640 done.
00:06:13.640
00:06:13.640 real 0m1.251s
00:06:13.640 user 0m4.167s
00:06:13.640 sys 0m0.079s
00:06:13.640 21:12:48 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:13.640 21:12:48 event.event_perf -- common/autotest_common.sh@10 -- # set +x
00:06:13.640 ************************************
00:06:13.640 END TEST event_perf
00:06:13.640 ************************************
00:06:13.640 21:12:48 event -- common/autotest_common.sh@1142 -- # return 0
00:06:13.640 21:12:48 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:13.640 21:12:48 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:13.640 21:12:48 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:13.640 21:12:48 event -- common/autotest_common.sh@10 -- # set +x
00:06:13.640 ************************************
00:06:13.640 START TEST event_reactor
00:06:13.640 ************************************
00:06:13.640 21:12:48 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1
00:06:13.640 [2024-07-11 21:12:48.379105] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
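The event_perf run above starts one reactor per bit of the core mask and counts the events each lcore processes for the requested duration, landing around 240k events on each of the four lcores here. A sketch of the invocation as the harness issues it (binary built under test/event/event_perf):

  # Mask 0xF: four reactors; -t 1: one second of event submission per lcore.
  ./test/event/event_perf/event_perf -m 0xF -t 1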
00:06:13.640 [2024-07-11 21:12:48.379171] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid776974 ]
00:06:13.640 EAL: No free 2048 kB hugepages reported on node 1
00:06:13.900 [2024-07-11 21:12:48.442586] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.900 [2024-07-11 21:12:48.535561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.283 test_start
00:06:15.283 oneshot
00:06:15.283 tick 100
00:06:15.283 tick 100
00:06:15.283 tick 250
00:06:15.283 tick 100
00:06:15.283 tick 100
00:06:15.283 tick 100
00:06:15.283 tick 250
00:06:15.283 tick 500
00:06:15.283 tick 100
00:06:15.283 tick 100
00:06:15.283 tick 250
00:06:15.283 tick 100
00:06:15.283 tick 100
00:06:15.283 test_end
00:06:15.283
00:06:15.283 real 0m1.250s
00:06:15.283 user 0m1.159s
00:06:15.283 sys 0m0.087s
00:06:15.283 21:12:49 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:15.283 21:12:49 event.event_reactor -- common/autotest_common.sh@10 -- # set +x
00:06:15.284 ************************************
00:06:15.284 END TEST event_reactor
00:06:15.284 ************************************
00:06:15.284 21:12:49 event -- common/autotest_common.sh@1142 -- # return 0
00:06:15.284 21:12:49 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:15.284 21:12:49 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']'
00:06:15.284 21:12:49 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:15.284 21:12:49 event -- common/autotest_common.sh@10 -- # set +x
00:06:15.284 ************************************
00:06:15.284 START TEST event_reactor_perf
00:06:15.284 ************************************
00:06:15.284 21:12:49 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1
00:06:15.284 [2024-07-11 21:12:49.673281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
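The event_reactor trace above checks timer events on a single reactor: a one-shot event fires once, and each repeating timer prints a tick line on expiration between test_start and test_end; the 100, 250, and 500 values distinguish the registered timers. Sketch of the run:

  # One core, one second; prints oneshot/tick lines as scheduled events fire.
  ./test/event/reactor/reactor -t 1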
00:06:15.284 [2024-07-11 21:12:49.673348] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777128 ]
00:06:15.284 EAL: No free 2048 kB hugepages reported on node 1
00:06:15.284 [2024-07-11 21:12:49.735949] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.284 [2024-07-11 21:12:49.829276] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.224 test_start
00:06:16.224 test_end
00:06:16.224 Performance: 358490 events per second
00:06:16.224
00:06:16.224 real 0m1.252s
00:06:16.224 user 0m1.161s
00:06:16.224 sys 0m0.086s
00:06:16.224 21:12:50 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:16.224 21:12:50 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x
00:06:16.224 ************************************
00:06:16.224 END TEST event_reactor_perf
00:06:16.224 ************************************
00:06:16.224 21:12:50 event -- common/autotest_common.sh@1142 -- # return 0
00:06:16.224 21:12:50 event -- event/event.sh@49 -- # uname -s
00:06:16.224 21:12:50 event -- event/event.sh@49 -- # '[' Linux = Linux ']'
00:06:16.224 21:12:50 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:16.224 21:12:50 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:06:16.224 21:12:50 event -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:16.224 21:12:50 event -- common/autotest_common.sh@10 -- # set +x
00:06:16.224 ************************************
00:06:16.224 START TEST event_scheduler
00:06:16.224 ************************************
00:06:16.224 21:12:50 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh
00:06:16.483 * Looking for test storage...
00:06:16.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler
00:06:16.483 21:12:51 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd
00:06:16.483 21:12:51 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=777310
00:06:16.483 21:12:51 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f
00:06:16.483 21:12:51 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT
00:06:16.484 21:12:51 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 777310
00:06:16.484 21:12:51 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 777310 ']'
00:06:16.484 21:12:51 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:16.484 21:12:51 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:16.484 21:12:51 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:16.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:16.484 21:12:51 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:16.484 21:12:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:16.484 [2024-07-11 21:12:51.048436] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:16.484 [2024-07-11 21:12:51.048547] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777310 ]
00:06:16.484 EAL: No free 2048 kB hugepages reported on node 1
00:06:16.484 [2024-07-11 21:12:51.111535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:06:16.484 [2024-07-11 21:12:51.201023] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.484 [2024-07-11 21:12:51.201087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:16.484 [2024-07-11 21:12:51.201153] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:06:16.484 [2024-07-11 21:12:51.201155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0
00:06:16.744 21:12:51 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic
00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:16.744 [2024-07-11 21:12:51.273976] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings
00:06:16.744 [2024-07-11 21:12:51.274003] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor
00:06:16.744 [2024-07-11 21:12:51.274021] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20
00:06:16.744 [2024-07-11 21:12:51.274032] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80
00:06:16.744 [2024-07-11 21:12:51.274043] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95
00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:16.744 21:12:51 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init
00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x
00:06:16.744 [2024-07-11 21:12:51.367855] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started.
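The ordering above is the part of scheduler.sh worth noting: the app starts with --wait-for-rpc, framework_set_scheduler selects the dynamic scheduler before initialization (the DPDK governor is skipped here because the 0xF mask covers only part of an SMT sibling set, so the defaults of load limit 20, core limit 80, and core busy 95 apply), and only then does framework_start_init bring the framework up. A minimal sketch of the same RPC sequence, assuming a target started with --wait-for-rpc:

  # The scheduler can only be switched while the framework is still waiting for RPCs.
  ./scripts/rpc.py framework_set_scheduler dynamic
  ./scripts/rpc.py framework_start_init
  # Confirm which scheduler is active afterwards.
  ./scripts/rpc.py framework_get_scheduler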
00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 ************************************ 00:06:16.744 START TEST scheduler_create_thread 00:06:16.744 ************************************ 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 2 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 3 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 4 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 5 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 6 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 7 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 8 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 9 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 10 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:16.744 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.314 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:17.314 00:06:17.314 real 0m0.592s 00:06:17.314 user 0m0.010s 00:06:17.314 sys 0m0.004s 00:06:17.314 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.314 21:12:51 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:06:17.314 ************************************ 00:06:17.314 END TEST scheduler_create_thread 00:06:17.314 ************************************ 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:06:17.314 21:12:52 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:06:17.314 21:12:52 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 777310 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 777310 ']' 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 777310 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 777310 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 777310' 00:06:17.314 killing process with pid 777310 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 777310 00:06:17.314 21:12:52 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 777310 00:06:17.884 [2024-07-11 21:12:52.463939] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
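The scheduler_create_thread block above drives everything through rpc_cmd --plugin scheduler_plugin, an out-of-tree RPC module registered by the scheduler test app: it creates pinned active and idle threads with a cpumask and an active percentage, retunes thread 11 to a 50% active share, and deletes thread 12. A hand-run sketch, assuming PYTHONPATH points at the directory containing the scheduler_plugin module (its location is not shown in this log):

  # Create a thread pinned to core 0 that reports 100% busy.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
  # Drop thread 11 to a 50% active share, then remove thread 12.
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_set_active 11 50
  ./scripts/rpc.py --plugin scheduler_plugin scheduler_thread_delete 12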
00:06:18.143 00:06:18.143 real 0m1.715s 00:06:18.143 user 0m2.231s 00:06:18.143 sys 0m0.341s 00:06:18.143 21:12:52 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.143 21:12:52 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 ************************************ 00:06:18.143 END TEST event_scheduler 00:06:18.143 ************************************ 00:06:18.143 21:12:52 event -- common/autotest_common.sh@1142 -- # return 0 00:06:18.143 21:12:52 event -- event/event.sh@51 -- # modprobe -n nbd 00:06:18.143 21:12:52 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:06:18.143 21:12:52 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:18.143 21:12:52 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.143 21:12:52 event -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 ************************************ 00:06:18.143 START TEST app_repeat 00:06:18.143 ************************************ 00:06:18.143 21:12:52 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@19 -- # repeat_pid=777622 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 777622' 00:06:18.143 Process app_repeat pid: 777622 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:06:18.143 spdk_app_start Round 0 00:06:18.143 21:12:52 event.app_repeat -- event/event.sh@25 -- # waitforlisten 777622 /var/tmp/spdk-nbd.sock 00:06:18.143 21:12:52 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 777622 ']' 00:06:18.143 21:12:52 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:18.143 21:12:52 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:18.143 21:12:52 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:18.143 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:18.143 21:12:52 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:18.143 21:12:52 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:18.143 [2024-07-11 21:12:52.755144] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
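app_repeat, launched above, replays the same bdev setup and teardown for several rounds on two cores to catch state leaking across restarts; each round creates two 64 MiB malloc bdevs with 4 KiB blocks and exports them over nbd for data verification. The invocation as the harness uses it:

  # -r selects the RPC socket, -m 0x3 runs two reactors, -t 4 repeats four rounds.
  ./test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4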
00:06:18.143 [2024-07-11 21:12:52.755208] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid777622 ] 00:06:18.143 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.143 [2024-07-11 21:12:52.817959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:18.143 [2024-07-11 21:12:52.908090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:18.143 [2024-07-11 21:12:52.908096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.401 21:12:53 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:18.401 21:12:53 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:18.401 21:12:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.659 Malloc0 00:06:18.659 21:12:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:18.916 Malloc1 00:06:18.916 21:12:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:18.916 21:12:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:19.174 /dev/nbd0 00:06:19.174 21:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:19.174 21:12:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.174 21:12:53 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.174 1+0 records in 00:06:19.174 1+0 records out 00:06:19.174 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000193572 s, 21.2 MB/s 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.174 21:12:53 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.174 21:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.174 21:12:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.174 21:12:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:19.432 /dev/nbd1 00:06:19.432 21:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:19.432 21:12:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:19.432 1+0 records in 00:06:19.432 1+0 records out 00:06:19.432 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000198427 s, 20.6 MB/s 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:19.432 21:12:54 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:19.432 21:12:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:19.432 21:12:54 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:19.432 21:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:19.432 21:12:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.432 21:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:19.690 21:12:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:19.690 { 00:06:19.690 "nbd_device": "/dev/nbd0", 00:06:19.690 "bdev_name": "Malloc0" 00:06:19.690 }, 00:06:19.690 { 00:06:19.690 "nbd_device": "/dev/nbd1", 00:06:19.691 "bdev_name": "Malloc1" 00:06:19.691 } 00:06:19.691 ]' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:19.691 { 00:06:19.691 "nbd_device": "/dev/nbd0", 00:06:19.691 "bdev_name": "Malloc0" 00:06:19.691 }, 00:06:19.691 { 00:06:19.691 "nbd_device": "/dev/nbd1", 00:06:19.691 "bdev_name": "Malloc1" 00:06:19.691 } 00:06:19.691 ]' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:19.691 /dev/nbd1' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:19.691 /dev/nbd1' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:19.691 256+0 records in 00:06:19.691 256+0 records out 00:06:19.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00389574 s, 269 MB/s 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:19.691 256+0 records in 00:06:19.691 256+0 records out 00:06:19.691 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0207218 s, 50.6 MB/s 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:19.691 256+0 records in 00:06:19.691 256+0 records out 00:06:19.691 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0237803 s, 44.1 MB/s 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:19.691 21:12:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:20.258 21:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:20.258 21:12:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:20.258 21:12:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:20.258 21:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.258 21:12:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.259 21:12:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:20.259 21:12:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.259 21:12:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.259 21:12:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:20.259 21:12:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:20.259 21:12:55 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:20.259 21:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:20.516 21:12:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:20.516 21:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:20.516 21:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:20.774 21:12:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:20.774 21:12:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:21.034 21:12:55 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:21.292 [2024-07-11 21:12:55.806209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:21.292 [2024-07-11 21:12:55.895943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.292 [2024-07-11 21:12:55.895943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:21.292 [2024-07-11 21:12:55.957372] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:21.292 [2024-07-11 21:12:55.957452] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:23.828 21:12:58 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:23.828 21:12:58 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:06:23.828 spdk_app_start Round 1 00:06:23.828 21:12:58 event.app_repeat -- event/event.sh@25 -- # waitforlisten 777622 /var/tmp/spdk-nbd.sock 00:06:23.828 21:12:58 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 777622 ']' 00:06:23.828 21:12:58 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:23.828 21:12:58 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:23.828 21:12:58 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:23.828 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
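The nbd_common.sh@35-45 records above trace waitfornbd_exit, the teardown guard that polls /proc/partitions until a stopped /dev/nbdX entry disappears. A minimal reconstruction of that loop, assuming a 0.1 s poll interval (this run exits on the first check, so no sleep is actually traced):

    waitfornbd_exit() {
        local nbd_name=$1
        local i
        for ((i = 1; i <= 20; i++)); do
            if grep -q -w "$nbd_name" /proc/partitions; then
                sleep 0.1        # still listed: poll again (interval is an assumption)
            else
                break            # kernel has released the device
            fi
        done
        return 0                 # a lingering device is treated as non-fatal here
    }

Polling /proc/partitions rather than trusting the RPC reply matters because nbd_stop_disk can return before the kernel finishes tearing the block device down.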
00:06:23.828 21:12:58 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:23.828 21:12:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:24.087 21:12:58 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:24.087 21:12:58 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:24.087 21:12:58 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.345 Malloc0 00:06:24.345 21:12:59 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:24.604 Malloc1 00:06:24.604 21:12:59 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:24.604 21:12:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:24.862 /dev/nbd0 00:06:24.862 21:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:24.862 21:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:24.862 21:12:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:24.862 21:12:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:24.862 21:12:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:24.862 21:12:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:24.863 21:12:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:25.120 1+0 records in 00:06:25.120 1+0 records out 00:06:25.120 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000190658 s, 21.5 MB/s 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.120 21:12:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.121 21:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.121 21:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.121 21:12:59 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:25.121 /dev/nbd1 00:06:25.378 21:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:25.378 21:12:59 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:25.378 1+0 records in 00:06:25.378 1+0 records out 00:06:25.378 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000191204 s, 21.4 MB/s 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:25.378 21:12:59 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:25.378 21:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:25.378 21:12:59 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:25.378 21:12:59 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:25.378 21:12:59 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.378 21:12:59 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:25.665 21:13:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:25.665 { 00:06:25.665 "nbd_device": "/dev/nbd0", 00:06:25.665 "bdev_name": "Malloc0" 00:06:25.665 }, 00:06:25.665 { 00:06:25.665 "nbd_device": "/dev/nbd1", 00:06:25.665 "bdev_name": "Malloc1" 00:06:25.665 } 00:06:25.665 ]' 00:06:25.665 21:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:25.665 { 00:06:25.665 "nbd_device": "/dev/nbd0", 00:06:25.665 "bdev_name": "Malloc0" 00:06:25.665 }, 00:06:25.665 { 00:06:25.665 "nbd_device": "/dev/nbd1", 00:06:25.665 "bdev_name": "Malloc1" 00:06:25.665 } 00:06:25.665 ]' 00:06:25.665 21:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:25.665 21:13:00 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:25.665 /dev/nbd1' 00:06:25.665 21:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:25.665 /dev/nbd1' 00:06:25.665 21:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:25.665 21:13:00 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:25.665 21:13:00 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:25.666 256+0 records in 00:06:25.666 256+0 records out 00:06:25.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0037877 s, 277 MB/s 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:25.666 256+0 records in 00:06:25.666 256+0 records out 00:06:25.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0235763 s, 44.5 MB/s 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:25.666 256+0 records in 00:06:25.666 256+0 records out 00:06:25.666 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0225423 s, 46.5 MB/s 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.666 21:13:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:25.924 21:13:00 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:26.183 21:13:00 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:26.441 21:13:01 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:26.441 21:13:01 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:26.699 21:13:01 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:26.957 [2024-07-11 21:13:01.649594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:27.216 [2024-07-11 21:13:01.740252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:27.216 [2024-07-11 21:13:01.740256] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.216 [2024-07-11 21:13:01.799615] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:27.216 [2024-07-11 21:13:01.799687] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:29.750 21:13:04 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:29.750 21:13:04 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:29.750 spdk_app_start Round 2 00:06:29.750 21:13:04 event.app_repeat -- event/event.sh@25 -- # waitforlisten 777622 /var/tmp/spdk-nbd.sock 00:06:29.750 21:13:04 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 777622 ']' 00:06:29.750 21:13:04 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:29.750 21:13:04 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:29.750 21:13:04 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:29.750 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
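Each round's data pass, as traced in the nbd_common.sh@70-85 records above, is a plain dd/cmp cycle: seed a 1 MiB scratch file, write it through both NBD devices with direct I/O, then byte-compare each device against the file. A sketch using the paths from this log (error handling is assumed; the helper presumably fails the test on the first cmp mismatch):

    tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
    nbd_list=(/dev/nbd0 /dev/nbd1)

    dd if=/dev/urandom of="$tmp_file" bs=4096 count=256      # 1 MiB of random data
    for nbd in "${nbd_list[@]}"; do
        dd if="$tmp_file" of="$nbd" bs=4096 count=256 oflag=direct
    done
    for nbd in "${nbd_list[@]}"; do
        cmp -b -n 1M "$tmp_file" "$nbd"                      # any differing byte fails
    done
    rm "$tmp_file"

oflag=direct keeps the page cache out of the write path, so the verifying read has to come back from the Malloc bdev behind each NBD device rather than from cached write data.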
00:06:29.750 21:13:04 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:29.750 21:13:04 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:30.007 21:13:04 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:30.008 21:13:04 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:30.008 21:13:04 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.265 Malloc0 00:06:30.265 21:13:04 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:30.524 Malloc1 00:06:30.524 21:13:05 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.524 21:13:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:30.782 /dev/nbd0 00:06:30.782 21:13:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:30.782 21:13:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:06:30.782 1+0 records in 00:06:30.782 1+0 records out 00:06:30.782 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000222045 s, 18.4 MB/s 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:30.782 21:13:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:30.782 21:13:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:30.782 21:13:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:30.782 21:13:05 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:31.040 /dev/nbd1 00:06:31.040 21:13:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:31.040 21:13:05 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:31.040 1+0 records in 00:06:31.040 1+0 records out 00:06:31.040 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000178066 s, 23.0 MB/s 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:06:31.040 21:13:05 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:06:31.040 21:13:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:31.040 21:13:05 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:31.040 21:13:05 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:31.040 21:13:05 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.040 21:13:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:31.298 21:13:05 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:06:31.298 { 00:06:31.298 "nbd_device": "/dev/nbd0", 00:06:31.298 "bdev_name": "Malloc0" 00:06:31.298 }, 00:06:31.298 { 00:06:31.298 "nbd_device": "/dev/nbd1", 00:06:31.298 "bdev_name": "Malloc1" 00:06:31.298 } 00:06:31.298 ]' 00:06:31.298 21:13:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:31.298 { 00:06:31.298 "nbd_device": "/dev/nbd0", 00:06:31.298 "bdev_name": "Malloc0" 00:06:31.298 }, 00:06:31.298 { 00:06:31.298 "nbd_device": "/dev/nbd1", 00:06:31.298 "bdev_name": "Malloc1" 00:06:31.298 } 00:06:31.298 ]' 00:06:31.298 21:13:05 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:31.298 /dev/nbd1' 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:31.298 /dev/nbd1' 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:31.298 256+0 records in 00:06:31.298 256+0 records out 00:06:31.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00502662 s, 209 MB/s 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:31.298 256+0 records in 00:06:31.298 256+0 records out 00:06:31.298 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0202456 s, 51.8 MB/s 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:31.298 21:13:06 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:31.556 256+0 records in 00:06:31.556 256+0 records out 00:06:31.556 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0250446 s, 41.9 MB/s 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.556 21:13:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:31.816 21:13:06 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:06:32.074 21:13:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:32.331 21:13:06 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:32.331 21:13:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:32.331 21:13:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:32.331 21:13:06 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:32.332 21:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:32.332 21:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:32.332 21:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:32.332 21:13:06 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:32.332 21:13:06 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:32.332 21:13:06 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:32.332 21:13:06 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:32.332 21:13:06 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:32.332 21:13:06 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:32.590 21:13:07 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:32.850 [2024-07-11 21:13:07.435591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:32.850 [2024-07-11 21:13:07.525479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.850 [2024-07-11 21:13:07.525482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.850 [2024-07-11 21:13:07.585162] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:32.850 [2024-07-11 21:13:07.585240] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:36.133 21:13:10 event.app_repeat -- event/event.sh@38 -- # waitforlisten 777622 /var/tmp/spdk-nbd.sock 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 777622 ']' 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:36.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
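Zooming out, every app_repeat round in this log drives the same RPC sequence against /var/tmp/spdk-nbd.sock. A condensed sketch of one round, using only calls that appear in the trace (ordering and the 64 MiB / 4096-byte Malloc geometry are read off the records above; anything else is an assumption):

    sock=/var/tmp/spdk-nbd.sock
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    "$rpc" -s "$sock" bdev_malloc_create 64 4096             # -> Malloc0
    "$rpc" -s "$sock" bdev_malloc_create 64 4096             # -> Malloc1
    "$rpc" -s "$sock" nbd_start_disk Malloc0 /dev/nbd0
    "$rpc" -s "$sock" nbd_start_disk Malloc1 /dev/nbd1
    "$rpc" -s "$sock" nbd_get_disks                          # sanity: expect 2 entries
    # ... dd write + cmp verify as sketched earlier ...
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd0
    "$rpc" -s "$sock" nbd_stop_disk /dev/nbd1
    "$rpc" -s "$sock" spdk_kill_instance SIGTERM             # end the round

The sleep 3 between rounds matches the event.sh@35 records and is what lets the next round's waitforlisten find a freshly restarted app on the same socket.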
00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:06:36.133 21:13:10 event.app_repeat -- event/event.sh@39 -- # killprocess 777622 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 777622 ']' 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 777622 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 777622 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 777622' 00:06:36.133 killing process with pid 777622 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@967 -- # kill 777622 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@972 -- # wait 777622 00:06:36.133 spdk_app_start is called in Round 0. 00:06:36.133 Shutdown signal received, stop current app iteration 00:06:36.133 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:36.133 spdk_app_start is called in Round 1. 00:06:36.133 Shutdown signal received, stop current app iteration 00:06:36.133 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:36.133 spdk_app_start is called in Round 2. 00:06:36.133 Shutdown signal received, stop current app iteration 00:06:36.133 Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 reinitialization... 00:06:36.133 spdk_app_start is called in Round 3. 
00:06:36.133 Shutdown signal received, stop current app iteration 00:06:36.133 21:13:10 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:36.133 21:13:10 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:36.133 00:06:36.133 real 0m17.958s 00:06:36.133 user 0m39.108s 00:06:36.133 sys 0m3.220s 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.133 21:13:10 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:36.133 ************************************ 00:06:36.133 END TEST app_repeat 00:06:36.133 ************************************ 00:06:36.133 21:13:10 event -- common/autotest_common.sh@1142 -- # return 0 00:06:36.133 21:13:10 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:36.133 21:13:10 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.133 21:13:10 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.133 21:13:10 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.133 21:13:10 event -- common/autotest_common.sh@10 -- # set +x 00:06:36.133 ************************************ 00:06:36.133 START TEST cpu_locks 00:06:36.133 ************************************ 00:06:36.133 21:13:10 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:36.133 * Looking for test storage... 00:06:36.133 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:36.133 21:13:10 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:36.133 21:13:10 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:36.133 21:13:10 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:36.133 21:13:10 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:36.133 21:13:10 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.133 21:13:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.133 21:13:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.133 ************************************ 00:06:36.133 START TEST default_locks 00:06:36.133 ************************************ 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=779970 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 779970 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 779970 ']' 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:36.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
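The default_locks test starting here checks for the per-core spdk_cpu_lock files the target takes on startup; the locks_exist helper traced just below pairs lslocks with a quiet grep, roughly:

    locks_exist() {
        local pid=$1
        lslocks -p "$pid" | grep -q spdk_cpu_lock    # match any spdk_cpu_lock entry
    }

The stray "lslocks: write error" in the next record is benign: grep -q exits as soon as it matches, the pipe closes, and lslocks takes an EPIPE on its remaining output. It does not indicate a test failure.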
00:06:36.133 21:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:36.133 21:13:10 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:36.133 [2024-07-11 21:13:10.864281] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:36.133 [2024-07-11 21:13:10.864372] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid779970 ] 00:06:36.133 EAL: No free 2048 kB hugepages reported on node 1 00:06:36.389 [2024-07-11 21:13:10.928805] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:36.389 [2024-07-11 21:13:11.021139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:36.648 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:36.648 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:06:36.649 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 779970 00:06:36.649 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 779970 00:06:36.649 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:36.908 lslocks: write error 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 779970 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 779970 ']' 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 779970 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 779970 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 779970' 00:06:36.908 killing process with pid 779970 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 779970 00:06:36.908 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 779970 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 779970 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 779970 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 779970 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 779970 ']' 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.490 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (779970) - No such process 00:06:37.490 ERROR: process (pid: 779970) is no longer running 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.490 00:06:37.490 real 0m1.161s 00:06:37.490 user 0m1.116s 00:06:37.490 sys 0m0.531s 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:37.490 21:13:11 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.490 ************************************ 00:06:37.490 END TEST default_locks 00:06:37.490 ************************************ 00:06:37.490 21:13:11 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:37.490 21:13:11 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:37.490 21:13:11 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:37.490 21:13:11 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:37.490 21:13:11 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:37.490 ************************************ 00:06:37.490 START TEST default_locks_via_rpc 00:06:37.490 ************************************ 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=780134 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:37.490 21:13:12 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 780134 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 780134 ']' 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:37.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:37.490 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.490 [2024-07-11 21:13:12.064893] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:37.490 [2024-07-11 21:13:12.064993] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780134 ] 00:06:37.490 EAL: No free 2048 kB hugepages reported on node 1 00:06:37.490 [2024-07-11 21:13:12.137884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.490 [2024-07-11 21:13:12.238921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.749 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:37.749 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:37.749 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:37.749 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.749 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 780134 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 780134 00:06:37.750 21:13:12 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:38.008 21:13:12 
event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 780134 00:06:38.008 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 780134 ']' 00:06:38.008 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 780134 00:06:38.008 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:06:38.008 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:38.008 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780134 00:06:38.266 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:38.266 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:38.266 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780134' 00:06:38.266 killing process with pid 780134 00:06:38.266 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 780134 00:06:38.266 21:13:12 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 780134 00:06:38.523 00:06:38.523 real 0m1.184s 00:06:38.523 user 0m1.218s 00:06:38.523 sys 0m0.526s 00:06:38.523 21:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:38.523 21:13:13 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:38.523 ************************************ 00:06:38.523 END TEST default_locks_via_rpc 00:06:38.523 ************************************ 00:06:38.523 21:13:13 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:38.523 21:13:13 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:38.523 21:13:13 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:38.523 21:13:13 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:38.523 21:13:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:38.523 ************************************ 00:06:38.523 START TEST non_locking_app_on_locked_coremask 00:06:38.523 ************************************ 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=780296 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 780296 /var/tmp/spdk.sock 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 780296 ']' 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.523 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:38.523 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:38.780 [2024-07-11 21:13:13.296112] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:38.780 [2024-07-11 21:13:13.296207] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780296 ] 00:06:38.780 EAL: No free 2048 kB hugepages reported on node 1 00:06:38.780 [2024-07-11 21:13:13.359025] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:38.780 [2024-07-11 21:13:13.454452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=780428 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 780428 /var/tmp/spdk2.sock 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 780428 ']' 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:39.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:39.038 21:13:13 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:39.038 [2024-07-11 21:13:13.754287] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:39.038 [2024-07-11 21:13:13.754359] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780428 ] 00:06:39.038 EAL: No free 2048 kB hugepages reported on node 1 00:06:39.297 [2024-07-11 21:13:13.845495] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
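The trace above is the heart of the non_locking_app_on_locked_coremask case: a first spdk_tgt claims core 0 (-m 0x1), and a second instance is expected to come up on the same core only because it is launched with --disable-cpumask-locks and a separate RPC socket (-r /var/tmp/spdk2.sock). A minimal standalone sketch of the same sequence, assuming spdk_tgt is built at the path this job uses (adjust for your tree); the sleeps are a crude stand-in for the test's waitforlisten helper:

  #!/usr/bin/env bash
  # Sketch only: mirrors the flow traced above, not the test script itself.
  SPDK_BIN=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt

  # First target claims core 0 and creates /var/tmp/spdk_cpu_lock_000.
  "$SPDK_BIN" -m 0x1 &
  pid1=$!
  sleep 2   # stand-in for waitforlisten on /var/tmp/spdk.sock

  # Second target shares core 0; without --disable-cpumask-locks it would
  # exit with "Unable to acquire lock on assigned core mask".
  "$SPDK_BIN" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &
  pid2=$!
  sleep 2

  kill "$pid1" "$pid2"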
00:06:39.297 [2024-07-11 21:13:13.845527] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.297 [2024-07-11 21:13:14.029633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:40.232 21:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:40.232 21:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:40.232 21:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 780296 00:06:40.232 21:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 780296 00:06:40.232 21:13:14 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:40.493 lslocks: write error 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 780296 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 780296 ']' 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 780296 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780296 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780296' 00:06:40.493 killing process with pid 780296 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 780296 00:06:40.493 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 780296 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 780428 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 780428 ']' 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 780428 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780428 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780428' 00:06:41.430 killing 
process with pid 780428 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 780428 00:06:41.430 21:13:15 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 780428 00:06:41.689 00:06:41.689 real 0m3.156s 00:06:41.689 user 0m3.327s 00:06:41.689 sys 0m1.019s 00:06:41.689 21:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:41.689 21:13:16 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.689 ************************************ 00:06:41.689 END TEST non_locking_app_on_locked_coremask 00:06:41.689 ************************************ 00:06:41.689 21:13:16 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:41.689 21:13:16 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:06:41.689 21:13:16 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:41.689 21:13:16 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:41.689 21:13:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:41.689 ************************************ 00:06:41.689 START TEST locking_app_on_unlocked_coremask 00:06:41.689 ************************************ 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=780728 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 780728 /var/tmp/spdk.sock 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 780728 ']' 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:41.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:41.689 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:41.948 [2024-07-11 21:13:16.501964] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:41.948 [2024-07-11 21:13:16.502055] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780728 ] 00:06:41.948 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.948 [2024-07-11 21:13:16.564477] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
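The locks_exist lines in the trace (lslocks -p <pid> piped into grep -q spdk_cpu_lock) verify that the first target still holds its per-core lock file; the stray "lslocks: write error" is expected noise here, most likely because grep -q exits on the first match and lslocks then writes into a closed pipe. A hedged sketch of the same check (helper name taken from cpu_locks.sh):

  # Sketch of the locks_exist check seen in the trace above.
  locks_exist() {
      local pid=$1
      # Succeeds if the process holds any spdk_cpu_lock_* file lock.
      lslocks -p "$pid" 2>/dev/null | grep -q spdk_cpu_lock
  }

  locks_exist 780296 && echo "core locks held"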
00:06:41.948 [2024-07-11 21:13:16.564514] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.948 [2024-07-11 21:13:16.655252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=780861 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 780861 /var/tmp/spdk2.sock 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 780861 ']' 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:42.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.207 21:13:16 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:42.207 [2024-07-11 21:13:16.955060] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:42.207 [2024-07-11 21:13:16.955165] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid780861 ] 00:06:42.467 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.467 [2024-07-11 21:13:17.040459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:42.467 [2024-07-11 21:13:17.223569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.403 21:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:43.403 21:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:43.403 21:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 780861 00:06:43.403 21:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 780861 00:06:43.403 21:13:17 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:43.688 lslocks: write error 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 780728 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 780728 ']' 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 780728 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780728 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780728' 00:06:43.688 killing process with pid 780728 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 780728 00:06:43.688 21:13:18 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 780728 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 780861 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 780861 ']' 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 780861 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 780861 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 780861' 00:06:44.629 killing process with pid 780861 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 780861 00:06:44.629 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 780861 00:06:44.889 00:06:44.889 real 0m3.134s 00:06:44.889 user 0m3.282s 00:06:44.889 sys 0m1.015s 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:44.889 ************************************ 00:06:44.889 END TEST locking_app_on_unlocked_coremask 00:06:44.889 ************************************ 00:06:44.889 21:13:19 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:44.889 21:13:19 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:06:44.889 21:13:19 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:44.889 21:13:19 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:44.889 21:13:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:44.889 ************************************ 00:06:44.889 START TEST locking_app_on_locked_coremask 00:06:44.889 ************************************ 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=781164 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 781164 /var/tmp/spdk.sock 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 781164 ']' 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:44.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:44.889 21:13:19 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.149 [2024-07-11 21:13:19.688078] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:45.149 [2024-07-11 21:13:19.688185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781164 ] 00:06:45.149 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.149 [2024-07-11 21:13:19.751617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:45.149 [2024-07-11 21:13:19.840337] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:45.407 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:45.407 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:45.407 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=781175 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 781175 /var/tmp/spdk2.sock 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 781175 /var/tmp/spdk2.sock 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 781175 /var/tmp/spdk2.sock 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 781175 ']' 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:45.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:45.408 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:45.408 [2024-07-11 21:13:20.152569] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:45.408 [2024-07-11 21:13:20.152670] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781175 ] 00:06:45.667 EAL: No free 2048 kB hugepages reported on node 1 00:06:45.667 [2024-07-11 21:13:20.253126] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 781164 has claimed it. 00:06:45.667 [2024-07-11 21:13:20.253192] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:46.235 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (781175) - No such process 00:06:46.235 ERROR: process (pid: 781175) is no longer running 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 781164 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 781164 00:06:46.235 21:13:20 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:46.801 lslocks: write error 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 781164 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 781164 ']' 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 781164 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781164 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781164' 00:06:46.801 killing process with pid 781164 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 781164 00:06:46.801 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 781164 00:06:47.060 00:06:47.060 real 0m2.078s 00:06:47.060 user 0m2.263s 00:06:47.060 sys 0m0.654s 00:06:47.060 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.060 21:13:21 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.060 ************************************ 00:06:47.060 END TEST locking_app_on_locked_coremask 00:06:47.060 ************************************ 00:06:47.060 21:13:21 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:47.060 21:13:21 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:06:47.060 21:13:21 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:47.060 21:13:21 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.060 21:13:21 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:47.060 ************************************ 00:06:47.060 START TEST locking_overlapped_coremask 00:06:47.060 ************************************ 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=781467 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 781467 /var/tmp/spdk.sock 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 781467 ']' 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:47.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.060 21:13:21 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.060 [2024-07-11 21:13:21.819671] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:47.060 [2024-07-11 21:13:21.819785] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781467 ] 00:06:47.320 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.320 [2024-07-11 21:13:21.884843] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:47.320 [2024-07-11 21:13:21.975348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:47.320 [2024-07-11 21:13:21.975416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:47.320 [2024-07-11 21:13:21.975418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=781474 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 781474 /var/tmp/spdk2.sock 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 781474 /var/tmp/spdk2.sock 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 781474 /var/tmp/spdk2.sock 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 781474 ']' 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:47.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:47.577 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:47.578 [2024-07-11 21:13:22.278234] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:06:47.578 [2024-07-11 21:13:22.278331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781474 ] 00:06:47.578 EAL: No free 2048 kB hugepages reported on node 1 00:06:47.835 [2024-07-11 21:13:22.366234] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 781467 has claimed it. 00:06:47.835 [2024-07-11 21:13:22.366301] app.c: 901:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:06:48.403 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (781474) - No such process 00:06:48.403 ERROR: process (pid: 781474) is no longer running 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 781467 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 781467 ']' 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 781467 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:48.403 21:13:22 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781467 00:06:48.403 21:13:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:48.403 21:13:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:48.403 21:13:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781467' 00:06:48.403 killing process with pid 781467 00:06:48.403 21:13:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 781467 00:06:48.403 21:13:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 781467 00:06:48.661 00:06:48.661 real 0m1.659s 00:06:48.661 user 0m4.492s 00:06:48.661 sys 0m0.449s 00:06:48.661 21:13:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:48.661 21:13:23 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:48.661 ************************************ 00:06:48.661 END TEST locking_overlapped_coremask 00:06:48.661 ************************************ 00:06:48.919 21:13:23 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:48.919 21:13:23 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:06:48.919 21:13:23 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:48.919 21:13:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:48.919 21:13:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:48.919 ************************************ 00:06:48.919 START TEST locking_overlapped_coremask_via_rpc 00:06:48.919 ************************************ 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=781642 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 781642 /var/tmp/spdk.sock 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 781642 ']' 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:48.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:48.919 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:48.919 [2024-07-11 21:13:23.530151] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:48.919 [2024-07-11 21:13:23.530247] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781642 ] 00:06:48.919 EAL: No free 2048 kB hugepages reported on node 1 00:06:48.919 [2024-07-11 21:13:23.596706] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
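With -m 0x7 the target claims cores 0 through 2, so the check_remaining_locks expansion traced above asserts that exactly /var/tmp/spdk_cpu_lock_000 through _002 exist once the overlapping instance has been refused. A sketch of the same assertion; the quoted string comparison stands in for the escaped literal match the test performs:

  # Sketch of check_remaining_locks for a 0x7 (cores 0-2) mask.
  check_remaining_locks() {
      local locks=(/var/tmp/spdk_cpu_lock_*)
      local expected=(/var/tmp/spdk_cpu_lock_{000..002})
      # The globbed lock files must match the expected list verbatim.
      [[ ${locks[*]} == "${expected[*]}" ]]
  }

  check_remaining_locks || echo "unexpected lock files left behind"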
00:06:48.919 [2024-07-11 21:13:23.596767] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.177 [2024-07-11 21:13:23.693122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:49.177 [2024-07-11 21:13:23.693173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:49.177 [2024-07-11 21:13:23.693191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:49.177 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:49.177 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:49.177 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=781773 00:06:49.177 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:06:49.177 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 781773 /var/tmp/spdk2.sock 00:06:49.177 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 781773 ']' 00:06:49.177 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:49.177 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:49.437 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:49.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:06:49.437 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:49.437 21:13:23 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:49.437 [2024-07-11 21:13:23.999653] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:49.437 [2024-07-11 21:13:23.999770] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid781773 ] 00:06:49.437 EAL: No free 2048 kB hugepages reported on node 1 00:06:49.437 [2024-07-11 21:13:24.088014] app.c: 905:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:06:49.437 [2024-07-11 21:13:24.088077] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:49.696 [2024-07-11 21:13:24.263950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:49.696 [2024-07-11 21:13:24.264012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:49.696 [2024-07-11 21:13:24.264014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.262 [2024-07-11 21:13:24.944865] app.c: 770:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 781642 has claimed it. 
00:06:50.262 request: 00:06:50.262 { 00:06:50.262 "method": "framework_enable_cpumask_locks", 00:06:50.262 "req_id": 1 00:06:50.262 } 00:06:50.262 Got JSON-RPC error response 00:06:50.262 response: 00:06:50.262 { 00:06:50.262 "code": -32603, 00:06:50.262 "message": "Failed to claim CPU core: 2" 00:06:50.262 } 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 781642 /var/tmp/spdk.sock 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 781642 ']' 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.262 21:13:24 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.519 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.519 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:50.519 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 781773 /var/tmp/spdk2.sock 00:06:50.520 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 781773 ']' 00:06:50.520 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:50.520 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:50.520 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:50.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
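The -32603 response above is the expected outcome: the first target holds cores 0-2 (0x7), the second asked for cores 2-4 (0x1c, its reactors start on cores 2, 3 and 4), and the shared core is exactly 0x7 & 0x1c = 0x4, i.e. core 2, matching "Failed to claim CPU core: 2". The second instance only booted because it was started with --disable-cpumask-locks, and the test confirms that re-enabling locks over RPC fails while the overlap exists. A sketch of the same probe using SPDK's rpc.py, with the script path assumed from this workspace:

  # Overlap check: bit 2 is the only core shared between the two masks.
  printf 'shared mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4 (core 2)

  # Ask the second target to claim its cores; expected to fail with -32603
  # while the first target still holds core 2.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/spdk2.sock framework_enable_cpumask_locks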
00:06:50.520 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:50.520 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.779 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:50.779 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:50.779 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:50.779 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:50.779 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:50.779 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:50.779 00:06:50.779 real 0m1.966s 00:06:50.779 user 0m1.046s 00:06:50.779 sys 0m0.153s 00:06:50.779 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:50.779 21:13:25 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:50.779 ************************************ 00:06:50.779 END TEST locking_overlapped_coremask_via_rpc 00:06:50.779 ************************************ 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:06:50.779 21:13:25 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:50.779 21:13:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 781642 ]] 00:06:50.779 21:13:25 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 781642 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 781642 ']' 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 781642 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781642 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781642' 00:06:50.779 killing process with pid 781642 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 781642 00:06:50.779 21:13:25 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 781642 00:06:51.344 21:13:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 781773 ]] 00:06:51.344 21:13:25 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 781773 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 781773 ']' 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 781773 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 781773 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 781773' 00:06:51.344 killing process with pid 781773 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 781773 00:06:51.344 21:13:25 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 781773 00:06:51.603 21:13:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.603 21:13:26 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:51.603 21:13:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 781642 ]] 00:06:51.603 21:13:26 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 781642 00:06:51.603 21:13:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 781642 ']' 00:06:51.603 21:13:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 781642 00:06:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (781642) - No such process 00:06:51.603 21:13:26 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 781642 is not found' 00:06:51.603 Process with pid 781642 is not found 00:06:51.603 21:13:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 781773 ]] 00:06:51.603 21:13:26 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 781773 00:06:51.603 21:13:26 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 781773 ']' 00:06:51.603 21:13:26 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 781773 00:06:51.603 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (781773) - No such process 00:06:51.603 21:13:26 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 781773 is not found' 00:06:51.603 Process with pid 781773 is not found 00:06:51.603 21:13:26 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:51.603 00:06:51.603 real 0m15.583s 00:06:51.603 user 0m27.390s 00:06:51.603 sys 0m5.266s 00:06:51.603 21:13:26 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.603 21:13:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:51.603 ************************************ 00:06:51.603 END TEST cpu_locks 00:06:51.603 ************************************ 00:06:51.603 21:13:26 event -- common/autotest_common.sh@1142 -- # return 0 00:06:51.603 00:06:51.603 real 0m39.357s 00:06:51.603 user 1m15.341s 00:06:51.603 sys 0m9.324s 00:06:51.603 21:13:26 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:51.603 21:13:26 event -- common/autotest_common.sh@10 -- # set +x 00:06:51.603 ************************************ 00:06:51.603 END TEST event 00:06:51.603 ************************************ 00:06:51.603 21:13:26 -- common/autotest_common.sh@1142 -- # return 0 00:06:51.603 21:13:26 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:51.603 21:13:26 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:51.603 21:13:26 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.603 21:13:26 -- 
common/autotest_common.sh@10 -- # set +x 00:06:51.863 ************************************ 00:06:51.863 START TEST thread 00:06:51.863 ************************************ 00:06:51.863 21:13:26 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:51.863 * Looking for test storage... 00:06:51.863 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:51.863 21:13:26 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.863 21:13:26 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:51.863 21:13:26 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:51.863 21:13:26 thread -- common/autotest_common.sh@10 -- # set +x 00:06:51.863 ************************************ 00:06:51.863 START TEST thread_poller_perf 00:06:51.863 ************************************ 00:06:51.863 21:13:26 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:51.863 [2024-07-11 21:13:26.482995] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:51.863 [2024-07-11 21:13:26.483070] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782136 ] 00:06:51.863 EAL: No free 2048 kB hugepages reported on node 1 00:06:51.863 [2024-07-11 21:13:26.542262] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:51.863 [2024-07-11 21:13:26.630446] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.863 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:53.243 ====================================== 00:06:53.243 busy:2714980102 (cyc) 00:06:53.243 total_run_count: 301000 00:06:53.243 tsc_hz: 2700000000 (cyc) 00:06:53.243 ====================================== 00:06:53.243 poller_cost: 9019 (cyc), 3340 (nsec) 00:06:53.243 00:06:53.243 real 0m1.254s 00:06:53.243 user 0m1.166s 00:06:53.243 sys 0m0.083s 00:06:53.243 21:13:27 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:53.243 21:13:27 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 ************************************ 00:06:53.243 END TEST thread_poller_perf 00:06:53.243 ************************************ 00:06:53.243 21:13:27 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:53.243 21:13:27 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.243 21:13:27 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:53.243 21:13:27 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:53.243 21:13:27 thread -- common/autotest_common.sh@10 -- # set +x 00:06:53.243 ************************************ 00:06:53.243 START TEST thread_poller_perf 00:06:53.243 ************************************ 00:06:53.243 21:13:27 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:53.243 [2024-07-11 21:13:27.786938] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:53.243 [2024-07-11 21:13:27.787002] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782294 ] 00:06:53.243 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.243 [2024-07-11 21:13:27.849975] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:53.243 [2024-07-11 21:13:27.941306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:53.243 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:54.621 ====================================== 00:06:54.621 busy:2703101278 (cyc) 00:06:54.621 total_run_count: 3859000 00:06:54.621 tsc_hz: 2700000000 (cyc) 00:06:54.621 ====================================== 00:06:54.621 poller_cost: 700 (cyc), 259 (nsec) 00:06:54.621 00:06:54.621 real 0m1.251s 00:06:54.621 user 0m1.154s 00:06:54.621 sys 0m0.091s 00:06:54.621 21:13:29 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.621 21:13:29 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:54.621 ************************************ 00:06:54.621 END TEST thread_poller_perf 00:06:54.621 ************************************ 00:06:54.621 21:13:29 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:54.621 21:13:29 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:54.621 00:06:54.621 real 0m2.655s 00:06:54.621 user 0m2.384s 00:06:54.621 sys 0m0.271s 00:06:54.621 21:13:29 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:54.621 21:13:29 thread -- common/autotest_common.sh@10 -- # set +x 00:06:54.621 ************************************ 00:06:54.621 END TEST thread 00:06:54.621 ************************************ 00:06:54.621 21:13:29 -- common/autotest_common.sh@1142 -- # return 0 00:06:54.621 21:13:29 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:54.621 21:13:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:54.621 21:13:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:54.621 21:13:29 -- common/autotest_common.sh@10 -- # set +x 00:06:54.621 ************************************ 00:06:54.621 START TEST accel 00:06:54.621 ************************************ 00:06:54.621 21:13:29 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:54.621 * Looking for test storage... 00:06:54.621 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:54.621 21:13:29 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:54.621 21:13:29 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:54.621 21:13:29 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:54.621 21:13:29 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=782502 00:06:54.621 21:13:29 accel -- accel/accel.sh@63 -- # waitforlisten 782502 00:06:54.621 21:13:29 accel -- common/autotest_common.sh@829 -- # '[' -z 782502 ']' 00:06:54.621 21:13:29 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:54.621 21:13:29 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:54.621 21:13:29 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:54.621 21:13:29 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:54.621 21:13:29 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:54.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
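The poller_cost figures in the two summaries above are plain division over the printed counters: busy cycles divided by total_run_count, then converted to nanoseconds via tsc_hz. A worked check in shell arithmetic, values copied from the log:

    # 1 us period: each poll also pays the timer bookkeeping
    echo $(( 2714980102 / 301000 ))   # -> 9019 cycles per poll
    echo $(( 9019 * 1000 / 2700 ))    # -> 3340 ns at tsc_hz = 2.7 GHz
    # 0 us period: a pure busy poll is an order of magnitude cheaper
    echo $(( 2703101278 / 3859000 ))  # -> 700 cycles per poll
    echo $(( 700 * 1000 / 2700 ))     # -> 259 ns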
00:06:54.621 21:13:29 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:54.621 21:13:29 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:54.621 21:13:29 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:54.621 21:13:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.621 21:13:29 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:54.621 21:13:29 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:54.621 21:13:29 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:54.621 21:13:29 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:54.621 21:13:29 accel -- accel/accel.sh@41 -- # jq -r . 00:06:54.621 [2024-07-11 21:13:29.203852] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:54.621 [2024-07-11 21:13:29.203956] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782502 ] 00:06:54.621 EAL: No free 2048 kB hugepages reported on node 1 00:06:54.621 [2024-07-11 21:13:29.269002] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:54.621 [2024-07-11 21:13:29.360796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:54.879 21:13:29 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.879 21:13:29 accel -- common/autotest_common.sh@862 -- # return 0 00:06:54.879 21:13:29 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:54.879 21:13:29 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:54.879 21:13:29 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:54.879 21:13:29 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:54.879 21:13:29 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:54.879 21:13:29 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:54.879 21:13:29 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:54.879 21:13:29 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:54.879 21:13:29 accel -- common/autotest_common.sh@10 -- # set +x 00:06:54.879 21:13:29 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 
21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # IFS== 00:06:55.139 21:13:29 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:55.139 21:13:29 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:55.139 21:13:29 accel -- accel/accel.sh@75 -- # killprocess 782502 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@948 -- # '[' -z 782502 ']' 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@952 -- # kill -0 782502 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@953 -- # uname 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 782502 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 782502' 00:06:55.139 killing process with pid 782502 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@967 -- # kill 782502 00:06:55.139 21:13:29 accel -- common/autotest_common.sh@972 -- # wait 782502 00:06:55.398 21:13:30 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:55.398 21:13:30 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:55.398 21:13:30 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:55.398 21:13:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.398 21:13:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.398 21:13:30 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:55.398 21:13:30 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
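The long run of IFS== / read -r opc module lines above is a single loop unrolled by xtrace: every opcode=module pair returned by the accel_get_opc_assignments RPC is split on '=' and recorded as an expected software assignment. A minimal sketch of the same parsing pattern, with sample pairs invented in place of the RPC output:

    declare -A expected_opcs
    # stand-ins for: rpc.py accel_get_opc_assignments | jq -r '. | to_entries | map("\(.key)=\(.value)") | .[]'
    for opc_opt in copy=software fill=software crc32c=software; do
        IFS== read -r opc module <<< "$opc_opt"   # split "opc=module" on '='
        expected_opcs["$opc"]=$module
    done
    declare -p expected_opcs   # -> copy, fill and crc32c all mapped to software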
00:06:55.398 21:13:30 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.398 21:13:30 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:55.657 21:13:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.657 21:13:30 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:55.657 21:13:30 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:55.657 21:13:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.657 21:13:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.657 ************************************ 00:06:55.657 START TEST accel_missing_filename 00:06:55.657 ************************************ 00:06:55.657 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:55.657 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:55.657 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:55.657 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.657 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.657 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.657 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.657 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:55.657 21:13:30 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:55.657 [2024-07-11 21:13:30.225067] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:55.657 [2024-07-11 21:13:30.225130] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782655 ] 00:06:55.657 EAL: No free 2048 kB hugepages reported on node 1 00:06:55.657 [2024-07-11 21:13:30.287702] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:55.657 [2024-07-11 21:13:30.379727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:55.917 [2024-07-11 21:13:30.439296] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:55.917 [2024-07-11 21:13:30.517098] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:55.917 A filename is required. 
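The es bookkeeping that follows this abort is the harness's NOT() helper normalizing the child's exit status before asserting that it is non-zero: es=234 is first stripped of the 128 signal offset (giving 106) and then collapsed to 1. A rough sketch of the idea, heavily simplified from autotest_common.sh:

    NOT() {
        local es=0
        "$@" || es=$?                          # run the command, keep its exit status
        (( es > 128 )) && es=$(( es - 128 ))   # strip the signal offset (234 -> 106)
        (( es != 0 )) && es=1                  # collapse any remaining failure to 1 (abbreviated)
        (( es != 0 ))                          # NOT succeeds only if the command failed
    }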
00:06:55.917 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:55.917 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:55.917 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:55.917 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:55.917 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:55.917 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:55.917 00:06:55.917 real 0m0.386s 00:06:55.917 user 0m0.283s 00:06:55.917 sys 0m0.137s 00:06:55.917 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:55.917 21:13:30 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:55.917 ************************************ 00:06:55.917 END TEST accel_missing_filename 00:06:55.917 ************************************ 00:06:55.917 21:13:30 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:55.917 21:13:30 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.917 21:13:30 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:55.917 21:13:30 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:55.917 21:13:30 accel -- common/autotest_common.sh@10 -- # set +x 00:06:55.917 ************************************ 00:06:55.917 START TEST accel_compress_verify 00:06:55.917 ************************************ 00:06:55.917 21:13:30 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.917 21:13:30 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:55.917 21:13:30 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.917 21:13:30 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:55.917 21:13:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.917 21:13:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:55.917 21:13:30 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:55.918 21:13:30 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.918 21:13:30 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:55.918 21:13:30 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:55.918 21:13:30 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:55.918 21:13:30 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:55.918 21:13:30 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:55.918 21:13:30 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:55.918 21:13:30 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:55.918 21:13:30 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:55.918 21:13:30 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:55.918 [2024-07-11 21:13:30.652963] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:55.918 [2024-07-11 21:13:30.653023] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782801 ] 00:06:55.918 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.177 [2024-07-11 21:13:30.716720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.178 [2024-07-11 21:13:30.810382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.178 [2024-07-11 21:13:30.872835] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:56.437 [2024-07-11 21:13:30.957369] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:06:56.437 00:06:56.437 Compression does not support the verify option, aborting. 00:06:56.437 21:13:31 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:56.437 21:13:31 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.437 21:13:31 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:56.437 21:13:31 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:56.438 21:13:31 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:56.438 21:13:31 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.438 00:06:56.438 real 0m0.403s 00:06:56.438 user 0m0.296s 00:06:56.438 sys 0m0.139s 00:06:56.438 21:13:31 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.438 21:13:31 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:56.438 ************************************ 00:06:56.438 END TEST accel_compress_verify 00:06:56.438 ************************************ 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.438 21:13:31 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.438 ************************************ 00:06:56.438 START TEST accel_wrong_workload 00:06:56.438 ************************************ 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.438 21:13:31 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:56.438 21:13:31 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:56.438 Unsupported workload type: foobar 00:06:56.438 [2024-07-11 21:13:31.097664] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:56.438 accel_perf options: 00:06:56.438 [-h help message] 00:06:56.438 [-q queue depth per core] 00:06:56.438 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.438 [-T number of threads per core 00:06:56.438 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.438 [-t time in seconds] 00:06:56.438 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.438 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:56.438 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.438 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.438 [-S for crc32c workload, use this seed value (default 0) 00:06:56.438 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.438 [-f for fill workload, use this BYTE value (default 255) 00:06:56.438 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.438 [-y verify result if this switch is on] 00:06:56.438 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.438 Can be used to spread operations across a wider range of memory. 
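The option dump above also shows why the NOT cases in this stretch fail for different reasons: 'foobar' is simply absent from the accepted -w list, while compress is listed but needs -l to name an input file. Valid counterpart invocations, with workload and sizes picked arbitrarily from the documented options:

    # crc32c is in the accepted -w list; -o is the transfer size, -q the queue depth per core
    ./build/examples/accel_perf -t 1 -w crc32c -o 4096 -q 32
    # compress requires the -l input file the missing-filename case above left out
    # (and, as the compress_verify test showed, must not be combined with -y)
    ./build/examples/accel_perf -t 1 -w compress -l ./test/accel/bib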
00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.438 00:06:56.438 real 0m0.021s 00:06:56.438 user 0m0.011s 00:06:56.438 sys 0m0.009s 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.438 21:13:31 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:56.438 ************************************ 00:06:56.438 END TEST accel_wrong_workload 00:06:56.438 ************************************ 00:06:56.438 Error: writing output failed: Broken pipe 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.438 21:13:31 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.438 ************************************ 00:06:56.438 START TEST accel_negative_buffers 00:06:56.438 ************************************ 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:56.438 21:13:31 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:56.438 -x option must be non-negative. 
00:06:56.438 [2024-07-11 21:13:31.171320] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:56.438 accel_perf options: 00:06:56.438 [-h help message] 00:06:56.438 [-q queue depth per core] 00:06:56.438 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:56.438 [-T number of threads per core 00:06:56.438 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:56.438 [-t time in seconds] 00:06:56.438 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:56.438 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:56.438 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:56.438 [-l for compress/decompress workloads, name of uncompressed input file 00:06:56.438 [-S for crc32c workload, use this seed value (default 0) 00:06:56.438 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:56.438 [-f for fill workload, use this BYTE value (default 255) 00:06:56.438 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:56.438 [-y verify result if this switch is on] 00:06:56.438 [-a tasks to allocate per core (default: same value as -q)] 00:06:56.438 Can be used to spread operations across a wider range of memory. 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:56.438 00:06:56.438 real 0m0.024s 00:06:56.438 user 0m0.016s 00:06:56.438 sys 0m0.008s 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:56.438 21:13:31 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:56.438 ************************************ 00:06:56.438 END TEST accel_negative_buffers 00:06:56.438 ************************************ 00:06:56.438 Error: writing output failed: Broken pipe 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:56.438 21:13:31 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:56.438 21:13:31 accel -- common/autotest_common.sh@10 -- # set +x 00:06:56.697 ************************************ 00:06:56.697 START TEST accel_crc32c 00:06:56.697 ************************************ 00:06:56.697 21:13:31 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:56.697 [2024-07-11 21:13:31.231546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:56.697 [2024-07-11 21:13:31.231611] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid782868 ] 00:06:56.697 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.697 [2024-07-11 21:13:31.293177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:56.697 [2024-07-11 21:13:31.386294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:56.697 21:13:31 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:58.077 21:13:32 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:58.077 00:06:58.077 real 0m1.395s 00:06:58.077 user 0m1.258s 00:06:58.077 sys 0m0.139s 00:06:58.077 21:13:32 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:58.077 21:13:32 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:58.077 ************************************ 00:06:58.077 END TEST accel_crc32c 00:06:58.077 ************************************ 00:06:58.077 21:13:32 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:58.077 21:13:32 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:58.077 21:13:32 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:58.077 21:13:32 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:58.077 21:13:32 accel -- common/autotest_common.sh@10 -- # set +x 00:06:58.077 ************************************ 00:06:58.077 START TEST accel_crc32c_C2 00:06:58.077 ************************************ 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.077 21:13:32 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:58.077 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:58.077 [2024-07-11 21:13:32.669863] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:06:58.077 [2024-07-11 21:13:32.669924] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783130 ] 00:06:58.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:58.077 [2024-07-11 21:13:32.734175] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:58.077 [2024-07-11 21:13:32.829982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.337 21:13:32 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:58.337 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:58.338 21:13:32 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:59.716 00:06:59.716 real 0m1.403s 00:06:59.716 user 0m1.260s 00:06:59.716 sys 0m0.143s 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.716 21:13:34 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:59.716 ************************************ 00:06:59.716 END TEST accel_crc32c_C2 00:06:59.716 ************************************ 00:06:59.716 21:13:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:59.716 21:13:34 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:59.716 21:13:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:59.716 21:13:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.716 21:13:34 accel -- common/autotest_common.sh@10 -- # set +x 00:06:59.716 ************************************ 00:06:59.716 START TEST accel_copy 00:06:59.716 ************************************ 00:06:59.716 21:13:34 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:59.717 21:13:34 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:59.717 21:13:34 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
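Both crc32c passes above drive the same accel_perf binary with one knob changed: the first seeds the checksum with -S 32, the second uses -C 2 to checksum across a two-buffer io vector, and each settles at roughly 1.4 s wall time on the software module. The two invocations as the harness issues them, flag meanings taken from the usage text printed earlier:

    # seed variant: -S sets the crc32c seed value, -y verifies the computed checksums
    ./build/examples/accel_perf -t 1 -w crc32c -S 32 -y
    # vector variant: -C 2 configures a two-buffer io vector per operation
    ./build/examples/accel_perf -t 1 -w crc32c -y -C 2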
00:06:59.716 21:13:34 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y
00:06:59.716 21:13:34 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:06:59.716 ************************************
00:06:59.716 START TEST accel_copy
00:06:59.716 ************************************
00:06:59.717 21:13:34 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y
00:06:59.717 21:13:34 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y
00:06:59.717 21:13:34 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y
00:06:59.717 21:13:34 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config
00:06:59.717 [2024-07-11 21:13:34.127442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:06:59.717 [2024-07-11 21:13:34.127508] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783303 ]
00:06:59.717 EAL: No free 2048 kB hugepages reported on node 1
00:06:59.717 [2024-07-11 21:13:34.191118] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:59.717 [2024-07-11 21:13:34.283399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.159 21:13:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:01.159 21:13:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]]
00:07:01.159 21:13:35 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:01.159
00:07:01.159 real 0m1.414s
00:07:01.159 user 0m1.265s
00:07:01.159 sys 0m0.149s
00:07:01.159 ************************************
00:07:01.159 END TEST accel_copy
00:07:01.159 ************************************
00:07:01.159 21:13:35 accel -- common/autotest_common.sh@1142 -- # return 0
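Each test drives the accel_perf example binary directly, feeding a generated JSON accel config over /dev/fd/62. A standalone run of the same copy workload, assuming an SPDK tree already built; the path is illustrative, not the Jenkins workspace above:

    # Assumes a checkout built with ./configure && make.
    cd ~/spdk
    ./build/examples/accel_perf -t 1 -w copy -y    # 1-second copy run; -y verifies the results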
00:07:01.159 21:13:35 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:01.159 21:13:35 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:07:01.159 ************************************
00:07:01.159 START TEST accel_fill
00:07:01.159 ************************************
00:07:01.160 21:13:35 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:01.160 21:13:35 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:01.160 21:13:35 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y
00:07:01.160 21:13:35 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config
00:07:01.160 [2024-07-11 21:13:35.581766] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:01.160 [2024-07-11 21:13:35.581848] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783455 ]
00:07:01.160 EAL: No free 2048 kB hugepages reported on node 1
00:07:01.160 [2024-07-11 21:13:35.643916] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:01.160 [2024-07-11 21:13:35.736903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:01.160 21:13:35 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80
00:07:02.538 21:13:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:02.538 21:13:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]]
00:07:02.538 21:13:36 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:02.538
00:07:02.538 real 0m1.404s
00:07:02.538 user 0m1.263s
00:07:02.538 sys 0m0.142s
00:07:02.538 ************************************
00:07:02.538 END TEST accel_fill
00:07:02.538 ************************************
00:07:02.538 21:13:36 accel -- common/autotest_common.sh@1142 -- # return 0
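The fill workload adds three flags over the plain copy run: -f sets the fill byte (128, which shows up as val=0x80 in the trace above), while -q and -a appear to set the queue depth and task allocation count, both 64 here. A sketch of the same invocation outside the harness:

    # Flag meanings inferred from the traced values; treat them as assumptions.
    ./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y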
00:07:02.538 21:13:36 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y
00:07:02.538 21:13:36 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:02.538 ************************************
00:07:02.538 START TEST accel_copy_crc32c
00:07:02.538 ************************************
00:07:02.538 21:13:37 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y
00:07:02.538 21:13:37 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y
00:07:02.538 21:13:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y
00:07:02.538 21:13:37 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config
00:07:02.538 [2024-07-11 21:13:37.031576] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:02.538 [2024-07-11 21:13:37.031641] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783614 ]
00:07:02.538 EAL: No free 2048 kB hugepages reported on node 1
00:07:02.538 [2024-07-11 21:13:37.094904] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:02.538 [2024-07-11 21:13:37.186400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:03.916 21:13:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:03.916 21:13:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:03.916 21:13:38 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:03.916
00:07:03.916 real 0m1.401s
00:07:03.916 user 0m1.263s
00:07:03.916 sys 0m0.138s
00:07:03.916 ************************************
00:07:03.916 END TEST accel_copy_crc32c
00:07:03.916 ************************************
00:07:03.916 21:13:38 accel -- common/autotest_common.sh@1142 -- # return 0
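Every block in this log is produced by the run_test wrapper, which prints the banner pairs and the real/user/sys triple via time. A simplified sketch of that pattern; the real implementation in common/autotest_common.sh also manages xtrace state and more:

    # Illustrative re-creation of the banner/time pattern, not the actual
    # autotest_common.sh run_test.
    run_test() {
        local name=$1 banner='************************************'
        shift
        echo "$banner"; echo "START TEST $name"; echo "$banner"
        time "$@"
        local rc=$?
        echo "$banner"; echo "END TEST $name"; echo "$banner"
        return "$rc"
    }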
00:07:03.916 21:13:38 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2
00:07:03.916 21:13:38 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:07:03.916 ************************************
00:07:03.916 START TEST accel_copy_crc32c_C2
00:07:03.916 ************************************
00:07:03.916 21:13:38 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2
00:07:03.916 21:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2
00:07:03.916 21:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config
00:07:03.916 [2024-07-11 21:13:38.481360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:03.916 [2024-07-11 21:13:38.481428] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid783882 ]
00:07:03.916 EAL: No free 2048 kB hugepages reported on node 1
00:07:03.916 [2024-07-11 21:13:38.540268] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:03.916 [2024-07-11 21:13:38.631519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:04.176 21:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:04.176 21:13:38 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes'
00:07:05.113 21:13:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:05.113 21:13:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]]
00:07:05.113 21:13:39 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:05.113
00:07:05.113 real 0m1.380s
00:07:05.113 user 0m1.249s
00:07:05.113 sys 0m0.132s
00:07:05.113 ************************************
00:07:05.113 END TEST accel_copy_crc32c_C2
00:07:05.113 ************************************
00:07:05.113 21:13:39 accel -- common/autotest_common.sh@1142 -- # return 0
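The -C 2 variant chains two copy_crc32c operations into a single CRC: the trace above sets up 4096-byte source buffers and an 8192-byte destination, i.e. 2 x 4096 bytes. An equivalent standalone invocation:

    # -C is taken here to be the chain count for copy_crc32c, an assumption
    # based on the traced buffer sizes (2 * 4096 = 8192).
    ./build/examples/accel_perf -t 1 -w copy_crc32c -y -C 2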
00:07:05.113 21:13:39 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y
00:07:05.113 21:13:39 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:07:05.373 ************************************
00:07:05.373 START TEST accel_dualcast
00:07:05.373 ************************************
00:07:05.373 21:13:39 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y
00:07:05.373 21:13:39 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y
00:07:05.373 21:13:39 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y
00:07:05.373 21:13:39 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config
00:07:05.373 [2024-07-11 21:13:39.911436] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:05.373 [2024-07-11 21:13:39.911505] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784044 ]
00:07:05.373 EAL: No free 2048 kB hugepages reported on node 1
00:07:05.373 [2024-07-11 21:13:39.976794] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:05.373 [2024-07-11 21:13:40.077323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:06.569 21:13:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:06.569 21:13:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:07:06.569 21:13:41 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:06.569
00:07:06.569 real 0m1.423s
00:07:06.569 user 0m1.271s
00:07:06.569 sys 0m0.153s
00:07:06.569 ************************************
00:07:06.569 END TEST accel_dualcast
00:07:06.569 ************************************
00:07:06.569 21:13:41 accel -- common/autotest_common.sh@1142 -- # return 0
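dualcast copies one 4096-byte source to two destinations in a single operation. The harness always supplies a generated JSON config on /dev/fd/62; run by hand, accel_perf should fall back to built-in defaults when -c is omitted (an assumption, since every run in this log passes the fd):

    # Sketch under the assumption that -c is optional for accel_perf.
    ./build/examples/accel_perf -t 1 -w dualcast -y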
00:07:06.829 [2024-07-11 21:13:41.377169] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784200 ] 00:07:06.829 EAL: No free 2048 kB hugepages reported on node 1 00:07:06.829 [2024-07-11 21:13:41.439867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:06.829 [2024-07-11 21:13:41.533046] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:06.829 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:07.089 21:13:41 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.023 
21:13:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:07:08.023 21:13:42 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:08.023 00:07:08.023 real 0m1.410s 00:07:08.023 user 0m1.258s 00:07:08.023 sys 0m0.152s 00:07:08.023 21:13:42 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.023 21:13:42 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:07:08.023 ************************************ 00:07:08.023 END TEST accel_compare 00:07:08.023 ************************************ 00:07:08.023 21:13:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:08.023 21:13:42 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:07:08.023 21:13:42 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:07:08.023 21:13:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.023 21:13:42 accel -- common/autotest_common.sh@10 -- # set +x 00:07:08.283 ************************************ 00:07:08.283 START TEST accel_xor 00:07:08.283 ************************************ 00:07:08.283 21:13:42 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:08.283 21:13:42 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:08.283 [2024-07-11 21:13:42.830886] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
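The dense runs of case "$var" in, IFS=:, and read -r var val above are bash xtrace (set -x) output from the harness stepping through the expected settings for each accel_perf run (opcode compare, '4096 bytes' buffers, the software module, a '1 seconds' run); the closing [[ -n software ]], [[ -n compare ]], and [[ software == \s\o\f\t\w\a\r\e ]] tests then assert that a module and an opcode were recorded and that the software path ran. A minimal sketch of that shell idiom, with hypothetical key names, since the real accel.sh internals are not visible in this log:

# Sketch only: reproduces the IFS=:/read/case dispatch pattern seen in the
# xtrace, not the actual accel.sh implementation. Key names are invented.
parse_report() {
    local accel_module='' accel_opc=''
    while IFS=: read -r var val; do
        case "$var" in
            module) accel_module=$val ;;   # hypothetical keys for illustration
            opc)    accel_opc=$val ;;
        esac
    done
    [[ -n $accel_module ]] || return 1     # a module was recorded
    [[ -n $accel_opc ]] || return 1        # an opcode was recorded
    [[ $accel_module == software ]]        # same final check as the log's tests
}

printf 'module:software\nopc:compare\n' | parse_report \
    && echo 'accel_compare ran on the software module'

Every TEST section below replays the same loop with a different opcode, which is why the pattern repeats through the rest of the log.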
00:07:08.283 [2024-07-11 21:13:42.830948] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784366 ] 00:07:08.283 EAL: No free 2048 kB hugepages reported on node 1 00:07:08.283 [2024-07-11 21:13:42.894278] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:08.283 [2024-07-11 21:13:42.986989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.283 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:08.541 21:13:43 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:09.477 21:13:44 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:09.477 00:07:09.477 real 0m1.401s 00:07:09.477 user 0m1.253s 00:07:09.477 sys 0m0.149s 00:07:09.477 21:13:44 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:09.477 21:13:44 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:09.477 ************************************ 00:07:09.477 END TEST accel_xor 00:07:09.477 ************************************ 00:07:09.477 21:13:44 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:09.477 21:13:44 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:07:09.477 21:13:44 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:07:09.477 21:13:44 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:09.477 21:13:44 accel -- common/autotest_common.sh@10 -- # set +x 00:07:09.737 ************************************ 00:07:09.737 START TEST accel_xor 00:07:09.737 ************************************ 00:07:09.737 21:13:44 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:07:09.737 [2024-07-11 21:13:44.273239] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
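END TEST accel_xor above closes the two-source run, and run_test immediately re-enters accel_xor with -x 3, asking accel_perf for three XOR source buffers; in the trace that follows, val=3 takes the place of the first run's val=2. A byte-level illustration of what that workload computes and why it is self-checking (illustration only, not SPDK code):

# Illustration only: what "-w xor -y -x 3" computes per byte of the buffers.
src0=0xa5 src1=0x3c src2=0x0f
dst=$(( src0 ^ src1 ^ src2 ))
printf 'dst   = 0x%02x\n' "$dst"                          # 0x96
# XORing the result with every source cancels back to zero, the invariant a
# verification pass (requested here via -y, by all appearances) can check:
printf 'check = 0x%02x\n' $(( dst ^ src0 ^ src1 ^ src2 )) # 0x00

The cancellation property holds for any number of sources, which is why only the source count differs between the two accel_xor runs.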
00:07:09.737 [2024-07-11 21:13:44.273303] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784629 ] 00:07:09.737 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.737 [2024-07-11 21:13:44.335736] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.737 [2024-07-11 21:13:44.425578] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:09.737 21:13:44 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:07:11.116 21:13:45 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:11.116 00:07:11.116 real 0m1.398s 00:07:11.116 user 0m1.263s 00:07:11.116 sys 0m0.136s 00:07:11.116 21:13:45 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.116 21:13:45 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:07:11.116 ************************************ 00:07:11.116 END TEST accel_xor 00:07:11.116 ************************************ 00:07:11.116 21:13:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:11.116 21:13:45 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:07:11.116 21:13:45 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:11.116 21:13:45 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.116 21:13:45 accel -- common/autotest_common.sh@10 -- # set +x 00:07:11.116 ************************************ 00:07:11.116 START TEST accel_dif_verify 00:07:11.116 ************************************ 00:07:11.116 21:13:45 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:11.116 21:13:45 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:07:11.117 21:13:45 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:07:11.117 [2024-07-11 21:13:45.716958] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
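accel_dif_verify starts here: in the configuration trace that follows, the workload gets two '4096 bytes' buffers, a '512 bytes' block size, and '8 bytes' of metadata per block. Eight bytes is the standard T10 DIF layout: a 2-byte guard tag (a CRC over the block data), a 2-byte application tag, and a 4-byte reference tag. A small arithmetic sketch of how those sizes relate; this mapping of the traced values is a reading of the config, not something the log states outright:

# Assumed mapping of the traced sizes: 4096B buffers, 512B blocks, 8B DIF.
block_size=512
md_size=8    # T10 DIF: 2B guard CRC + 2B application tag + 4B reference tag
xfer=4096
blocks=$(( xfer / block_size ))
printf '%d protected blocks per buffer, %dB of DIF to check\n' \
    "$blocks" $(( blocks * md_size ))    # prints: 8 blocks, 64B of DIF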
00:07:11.117 [2024-07-11 21:13:45.717020] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784786 ] 00:07:11.117 EAL: No free 2048 kB hugepages reported on node 1 00:07:11.117 [2024-07-11 21:13:45.778582] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.117 [2024-07-11 21:13:45.872233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:07:11.376 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:11.377 21:13:45 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:07:12.756 21:13:47 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:12.756 00:07:12.756 real 0m1.402s 00:07:12.756 user 0m1.262s 00:07:12.756 sys 0m0.143s 00:07:12.756 21:13:47 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:12.756 21:13:47 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:07:12.756 ************************************ 00:07:12.756 END TEST accel_dif_verify 00:07:12.756 ************************************ 00:07:12.756 21:13:47 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:12.756 21:13:47 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:07:12.756 21:13:47 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:12.756 21:13:47 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:12.756 21:13:47 accel -- common/autotest_common.sh@10 -- # set +x 00:07:12.756 ************************************ 00:07:12.756 START TEST accel_dif_generate 00:07:12.756 ************************************ 00:07:12.756 21:13:47 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 
21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:07:12.756 [2024-07-11 21:13:47.160395] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:12.756 [2024-07-11 21:13:47.160459] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid784945 ] 00:07:12.756 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.756 [2024-07-11 21:13:47.224468] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.756 [2024-07-11 21:13:47.317096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:07:12.756 21:13:47 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.756 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.757 21:13:47 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:12.757 21:13:47 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:14.131 21:13:48 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:07:14.131 21:13:48 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:14.131 00:07:14.131 real 0m1.400s 00:07:14.131 user 0m1.262s 00:07:14.131 sys 0m0.141s 00:07:14.131 21:13:48 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.131 21:13:48 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:07:14.131 ************************************ 00:07:14.131 END TEST accel_dif_generate 00:07:14.131 ************************************ 00:07:14.131 21:13:48 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:14.131 21:13:48 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:07:14.131 21:13:48 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:07:14.131 21:13:48 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.131 21:13:48 accel -- common/autotest_common.sh@10 -- # set +x 00:07:14.131 ************************************ 00:07:14.131 START TEST accel_dif_generate_copy 00:07:14.131 ************************************ 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:07:14.131 [2024-07-11 21:13:48.605837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
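With dif_generate finished, run_test moves on to accel_dif_generate_copy, which, as the opcode name suggests, inserts DIF while copying the payload into a second buffer; the trace that follows configures two '4096 bytes' buffers, presumably source and destination. A sketch of how this stretch of the log drives the three DIF opcodes; the flags are taken verbatim from the xtrace, but the JSON accel config the harness feeds on fd 62 (-c /dev/fd/62) is omitted here, so treat this as illustrative rather than a drop-in reproduction:

# Sketch of the DIF workload invocations; binary path as shown in the log.
ACCEL_PERF=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
for w in dif_verify dif_generate dif_generate_copy; do
    "$ACCEL_PERF" -t 1 -w "$w"    # 1-second run per opcode, as accel.sh does
done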
00:07:14.131 [2024-07-11 21:13:48.605896] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785211 ] 00:07:14.131 EAL: No free 2048 kB hugepages reported on node 1 00:07:14.131 [2024-07-11 21:13:48.669381] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:14.131 [2024-07-11 21:13:48.763120] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.131 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:14.132 21:13:48 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:15.512 00:07:15.512 real 0m1.402s 00:07:15.512 user 0m1.263s 00:07:15.512 sys 0m0.140s 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:15.512 21:13:49 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:07:15.512 ************************************ 00:07:15.512 END TEST accel_dif_generate_copy 00:07:15.512 ************************************ 00:07:15.512 21:13:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:15.512 21:13:50 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:07:15.512 21:13:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.512 21:13:50 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:07:15.512 21:13:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:15.512 21:13:50 accel -- common/autotest_common.sh@10 -- # set +x 00:07:15.512 ************************************ 00:07:15.512 START TEST accel_comp 00:07:15.512 ************************************ 00:07:15.512 21:13:50 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:15.512 21:13:50 accel.accel_comp -- 
00:07:15.512 21:13:50 accel -- accel/accel.sh@115 -- # [[ y == y ]]
00:07:15.512 21:13:50 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.512 ************************************
00:07:15.512 START TEST accel_comp
00:07:15.512 ************************************
00:07:15.512 21:13:50 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=()
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]]
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]]
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]]
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]]
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=,
00:07:15.512 21:13:50 accel.accel_comp -- accel/accel.sh@41 -- # jq -r .
00:07:15.512 [2024-07-11 21:13:50.052669] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:15.512 [2024-07-11 21:13:50.052745] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785378 ]
00:07:15.512 EAL: No free 2048 kB hugepages reported on node 1
00:07:15.512 [2024-07-11 21:13:50.112541] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:15.512 [2024-07-11 21:13:50.201322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val=compress
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val=software
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val=32
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val=1
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:15.513 21:13:50 accel.accel_comp -- accel/accel.sh@20 -- # val=No
00:07:16.893 21:13:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:16.893 21:13:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]]
00:07:16.893 21:13:51 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:16.893 real 0m1.401s
00:07:16.893 user 0m1.264s
00:07:16.893 sys 0m0.139s
00:07:16.893 ************************************
00:07:16.893 END TEST accel_comp
00:07:16.893 ************************************
00:07:16.893 21:13:51 accel -- common/autotest_common.sh@1142 -- # return 0
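The accel.sh@12 entry records the exact binary and flags used, so the compress case can plausibly be re-run by hand. A hedged sketch using only flags visible in this log (-c config on a file descriptor, -t seconds, -w workload, -l input file); the empty '{}' JSON fed to fd 62 is an assumption standing in for whatever build_accel_config actually emits:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w compress \
      -l "$SPDK/test/accel/bib" 62< <(echo '{}')   # '{}' config is a placeholder assumption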
00:07:16.893 21:13:51 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:16.893 ************************************
00:07:16.893 START TEST accel_decomp
00:07:16.893 ************************************
00:07:16.893 21:13:51 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:16.893 21:13:51 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc
00:07:16.893 21:13:51 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module
00:07:16.893 21:13:51 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y
00:07:16.893 21:13:51 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config
00:07:16.894 [2024-07-11 21:13:51.493974] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:16.894 [2024-07-11 21:13:51.494052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785531 ]
00:07:16.894 EAL: No free 2048 kB hugepages reported on node 1
00:07:16.894 [2024-07-11 21:13:51.556280] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:16.894 [2024-07-11 21:13:51.649644] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=software
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=32
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=1
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds'
00:07:17.153 21:13:51 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes
00:07:18.534 21:13:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:18.534 21:13:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:18.534 21:13:52 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:18.534 real 0m1.407s
00:07:18.534 user 0m1.258s
00:07:18.534 sys 0m0.152s
00:07:18.534 ************************************
00:07:18.534 END TEST accel_decomp
00:07:18.534 ************************************
00:07:18.534 21:13:52 accel -- common/autotest_common.sh@1142 -- # return 0
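Every test above is bracketed by the same START/END banners, a real/user/sys time report, and a return 0. A rough reconstruction of what the run_test wrapper appears to do, inferred from those recurring lines rather than copied from autotest_common.sh:

  run_test() {                            # hedged sketch, not the verbatim SPDK helper
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"                           # accounts for the real/user/sys lines in the log
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc                          # the trailing "return 0" entries
  }

Note also that every decompress invocation in this log carries -y, which the compress run above does not; a plausible reading is that -y asks accel_perf to verify the output, but the log itself does not spell that out.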
00:07:18.534 21:13:52 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:18.534 ************************************
00:07:18.534 START TEST accel_decomp_full
00:07:18.534 ************************************
00:07:18.534 21:13:52 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:18.534 21:13:52 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0
00:07:18.534 21:13:52 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config
00:07:18.534 [2024-07-11 21:13:52.939664] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:18.534 [2024-07-11 21:13:52.939719] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785684 ]
00:07:18.534 EAL: No free 2048 kB hugepages reported on node 1
00:07:18.534 [2024-07-11 21:13:53.001360] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:18.534 [2024-07-11 21:13:53.097176] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes'
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds'
00:07:18.534 21:13:53 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes
00:07:19.941 21:13:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:19.941 21:13:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:19.941 21:13:54 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:19.941 real 0m1.415s
00:07:19.941 user 0m1.273s
00:07:19.941 sys 0m0.143s
00:07:19.941 ************************************
00:07:19.941 END TEST accel_decomp_full
00:07:19.941 ************************************
00:07:19.941 21:13:54 accel -- common/autotest_common.sh@1142 -- # return 0
00:07:19.941 21:13:54 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:19.941 ************************************
00:07:19.941 START TEST accel_decomp_mcore
00:07:19.941 ************************************
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config
00:07:19.941 [2024-07-11 21:13:54.399465] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:07:19.941 [2024-07-11 21:13:54.399534] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid785958 ]
00:07:19.941 EAL: No free 2048 kB hugepages reported on node 1
00:07:19.941 [2024-07-11 21:13:54.462506] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:07:19.941 [2024-07-11 21:13:54.558088] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:07:19.941 [2024-07-11 21:13:54.558141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:07:19.941 [2024-07-11 21:13:54.558260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:07:19.941 [2024-07-11 21:13:54.558263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes'
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds'
00:07:19.941 21:13:54 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes
00:07:21.320 21:13:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]]
00:07:21.320 21:13:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]]
00:07:21.320 21:13:55 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:07:21.321 real 0m1.405s
00:07:21.321 user 0m4.686s
00:07:21.321 sys 0m0.147s
00:07:21.321 ************************************
00:07:21.321 END TEST accel_decomp_mcore
00:07:21.321 ************************************
00:07:21.321 21:13:55 accel -- common/autotest_common.sh@1142 -- # return 0
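With -m 0xf the app reports "Total cores available: 4" and starts reactors on cores 0-3, and user time rises to roughly 4.7s for the one-second run, consistent with four polling reactors each burning about one core-second (versus ~1.3s in the single-core runs). A sketch of the multicore variant, same placeholder-config assumption as above:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK/build/examples/accel_perf" -c /dev/fd/62 -t 1 -w decompress \
      -l "$SPDK/test/accel/bib" -y -m 0xf 62< <(echo '{}')   # 0xf = core mask for cores 0-3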
00:07:21.321 [2024-07-11 21:13:55.847370] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786123 ] 00:07:21.321 EAL: No free 2048 kB hugepages reported on node 1 00:07:21.321 [2024-07-11 21:13:55.908389] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.321 [2024-07-11 21:13:56.003126] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.321 [2024-07-11 21:13:56.003181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.321 [2024-07-11 21:13:56.003293] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.321 [2024-07-11 21:13:56.003295] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:21.321 21:13:56 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:22.701 00:07:22.701 real 0m1.421s 00:07:22.701 user 0m4.754s 00:07:22.701 sys 0m0.150s 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.701 21:13:57 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:07:22.701 ************************************ 00:07:22.701 END TEST accel_decomp_full_mcore 00:07:22.701 ************************************ 00:07:22.701 21:13:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:22.702 21:13:57 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.702 21:13:57 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:07:22.702 21:13:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.702 21:13:57 accel -- common/autotest_common.sh@10 -- # set +x 00:07:22.702 ************************************ 00:07:22.702 START TEST accel_decomp_mthread 00:07:22.702 ************************************ 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:22.702 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:22.702 [2024-07-11 21:13:57.315110] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
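The long runs of "IFS=:", "read -r var val", and "case \"$var\"" entries above are accel.sh consuming accel_perf's printed configuration summary one "key: value" line at a time; the @22/@23 entries show it latching onto the module ("software") and opcode ("decompress") fields. A condensed sketch of that loop, with illustrative key patterns rather than accel_perf's exact output strings:

    # Sketch of the accel/accel.sh@19-23 parsing loop; the key patterns in
    # the case arms are assumptions for illustration, not the exact strings
    # accel_perf prints.
    parse_accel_perf_config() {
        while IFS=: read -r var val; do
            case "$var" in
                *opcode*) accel_opc=${val# } ;;     # e.g. "decompress"
                *module*) accel_module=${val# } ;;  # e.g. "software"
                *) : ;;  # queue depth, payload size, run time, ... ignored
            esac
        done
    }
    # Process substitution keeps the variables in the current shell:
    # parse_accel_perf_config < <(build/examples/accel_perf -t 1 -w decompress -l test/accel/bib -y -T 2)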
00:07:22.702 [2024-07-11 21:13:57.315170] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786284 ] 00:07:22.702 EAL: No free 2048 kB hugepages reported on node 1 00:07:22.702 [2024-07-11 21:13:57.378062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.962 [2024-07-11 21:13:57.472198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.962 21:13:57 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:22.962 21:13:57 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:24.337 00:07:24.337 real 0m1.417s 00:07:24.337 user 0m1.270s 00:07:24.337 sys 0m0.150s 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:24.337 21:13:58 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:24.337 ************************************ 00:07:24.337 END TEST accel_decomp_mthread 00:07:24.337 ************************************ 00:07:24.337 21:13:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:24.337 21:13:58 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.337 21:13:58 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:07:24.337 21:13:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:24.337 21:13:58 accel -- 
common/autotest_common.sh@10 -- # set +x 00:07:24.337 ************************************ 00:07:24.337 START TEST accel_decomp_full_mthread 00:07:24.337 ************************************ 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:07:24.337 [2024-07-11 21:13:58.772005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
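accel_decomp_full_mthread repeats the two-thread decompress run with -o 0 added, and the parsed configuration duly reports '111250 bytes' transfers where accel_decomp_mthread showed '4096 bytes'. A standalone sketch of the invocation traced at accel/accel.sh@12 (the -c /dev/fd/62 argument, through which the harness pipes its JSON accel config, is omitted here):

    # Sketch of the traced accel_perf run; flags as recorded in the log.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/examples/accel_perf" \
        -t 1 -w decompress -l "$SPDK/test/accel/bib" -y -o 0 -T 2
    # -t 1: run for 1 second   -w decompress: workload under test
    # -y:   verify the output  -T 2: two worker threads
    # -o 0: the full-buffer variant (hence the '111250 bytes' transfers)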
00:07:24.337 [2024-07-11 21:13:58.772079] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786500 ] 00:07:24.337 EAL: No free 2048 kB hugepages reported on node 1 00:07:24.337 [2024-07-11 21:13:58.835485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:24.337 [2024-07-11 21:13:58.929030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.337 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:58 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.338 21:13:59 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:24.338 21:13:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.338 21:13:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:24.338 21:13:59 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:07:24.338 21:13:59 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:24.338 21:13:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:24.338 21:13:59 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:07:25.716 00:07:25.716 real 0m1.441s 00:07:25.716 user 0m1.297s 00:07:25.716 sys 0m0.148s 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.716 21:14:00 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:07:25.716 ************************************ 00:07:25.716 END TEST accel_decomp_full_mthread 
00:07:25.716 ************************************ 00:07:25.716 21:14:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:25.716 21:14:00 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:07:25.716 21:14:00 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:25.716 21:14:00 accel -- accel/accel.sh@137 -- # build_accel_config 00:07:25.716 21:14:00 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:07:25.716 21:14:00 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:25.716 21:14:00 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:25.716 21:14:00 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:07:25.716 21:14:00 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:07:25.716 21:14:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:25.716 21:14:00 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:07:25.716 21:14:00 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:07:25.716 21:14:00 accel -- accel/accel.sh@40 -- # local IFS=, 00:07:25.716 21:14:00 accel -- accel/accel.sh@41 -- # jq -r . 00:07:25.716 ************************************ 00:07:25.716 START TEST accel_dif_functional_tests 00:07:25.716 ************************************ 00:07:25.716 21:14:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:07:25.716 [2024-07-11 21:14:00.282808] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:25.716 [2024-07-11 21:14:00.282870] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786712 ] 00:07:25.716 EAL: No free 2048 kB hugepages reported on node 1 00:07:25.716 [2024-07-11 21:14:00.343714] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:25.716 [2024-07-11 21:14:00.441542] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:25.716 [2024-07-11 21:14:00.441605] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:25.716 [2024-07-11 21:14:00.441608] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:25.975 00:07:25.975 00:07:25.975 CUnit - A unit testing framework for C - Version 2.1-3 00:07:25.975 http://cunit.sourceforge.net/ 00:07:25.975 00:07:25.975 00:07:25.975 Suite: accel_dif 00:07:25.975 Test: verify: DIF generated, GUARD check ...passed 00:07:25.975 Test: verify: DIF generated, APPTAG check ...passed 00:07:25.975 Test: verify: DIF generated, REFTAG check ...passed 00:07:25.975 Test: verify: DIF not generated, GUARD check ...[2024-07-11 21:14:00.526165] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:25.975 passed 00:07:25.975 Test: verify: DIF not generated, APPTAG check ...[2024-07-11 21:14:00.526239] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:25.975 passed 00:07:25.975 Test: verify: DIF not generated, REFTAG check ...[2024-07-11 21:14:00.526270] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:25.975 passed 00:07:25.975 Test: verify: APPTAG correct, APPTAG check ...passed 00:07:25.975 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-11 21:14:00.526331] dif.c: 
841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:07:25.975 passed 00:07:25.975 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:07:25.975 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:07:25.975 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:07:25.975 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-11 21:14:00.526474] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:07:25.975 passed 00:07:25.975 Test: verify copy: DIF generated, GUARD check ...passed 00:07:25.975 Test: verify copy: DIF generated, APPTAG check ...passed 00:07:25.975 Test: verify copy: DIF generated, REFTAG check ...passed 00:07:25.975 Test: verify copy: DIF not generated, GUARD check ...[2024-07-11 21:14:00.526631] dif.c: 826:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:07:25.975 passed 00:07:25.975 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-11 21:14:00.526666] dif.c: 841:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:07:25.975 passed 00:07:25.975 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-11 21:14:00.526699] dif.c: 776:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:07:25.975 passed 00:07:25.975 Test: generate copy: DIF generated, GUARD check ...passed 00:07:25.975 Test: generate copy: DIF generated, APPTAG check ...passed 00:07:25.975 Test: generate copy: DIF generated, REFTAG check ...passed 00:07:25.975 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:07:25.975 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:07:25.975 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:07:25.975 Test: generate copy: iovecs-len validate ...[2024-07-11 21:14:00.526940] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:07:25.975 passed 00:07:25.975 Test: generate copy: buffer alignment validate ...passed 00:07:25.975 00:07:25.975 Run Summary: Type Total Ran Passed Failed Inactive 00:07:25.975 suites 1 1 n/a 0 0 00:07:25.975 tests 26 26 26 0 0 00:07:25.975 asserts 115 115 115 0 n/a 00:07:25.975 00:07:25.975 Elapsed time = 0.002 seconds 00:07:25.975 00:07:25.975 real 0m0.492s 00:07:25.975 user 0m0.754s 00:07:25.975 sys 0m0.173s 00:07:25.975 21:14:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:25.975 21:14:00 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:07:25.975 ************************************ 00:07:25.975 END TEST accel_dif_functional_tests 00:07:25.975 ************************************ 00:07:26.234 21:14:00 accel -- common/autotest_common.sh@1142 -- # return 0 00:07:26.234 00:07:26.234 real 0m31.662s 00:07:26.234 user 0m35.054s 00:07:26.234 sys 0m4.543s 00:07:26.234 21:14:00 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.234 21:14:00 accel -- common/autotest_common.sh@10 -- # set +x 00:07:26.234 ************************************ 00:07:26.234 END TEST accel 00:07:26.234 ************************************ 00:07:26.234 21:14:00 -- common/autotest_common.sh@1142 -- # return 0 00:07:26.234 21:14:00 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:26.234 21:14:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.234 21:14:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.234 21:14:00 -- common/autotest_common.sh@10 -- # set +x 00:07:26.234 ************************************ 00:07:26.234 START TEST accel_rpc 00:07:26.234 ************************************ 00:07:26.234 21:14:00 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:07:26.234 * Looking for test storage... 00:07:26.234 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:07:26.234 21:14:00 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:07:26.234 21:14:00 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=786781 00:07:26.234 21:14:00 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:07:26.234 21:14:00 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 786781 00:07:26.234 21:14:00 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 786781 ']' 00:07:26.234 21:14:00 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:26.234 21:14:00 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:26.234 21:14:00 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:26.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:26.234 21:14:00 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:26.234 21:14:00 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.234 [2024-07-11 21:14:00.915107] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
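accel_rpc.sh launches a bare spdk_tgt with --wait-for-rpc, so the accel framework stays uninitialized until the test drives it over the RPC socket, and waitforlisten blocks until /var/tmp/spdk.sock answers. A simplified stand-in for that setup (rpc.py's -t retry flag is used here in place of the harness's own polling helper):

    # Simplified stand-in for the spdk_tgt + waitforlisten setup; the real
    # harness also wires killprocess into its ERR trap.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$SPDK/build/bin/spdk_tgt" --wait-for-rpc &
    spdk_tgt_pid=$!
    trap 'kill "$spdk_tgt_pid"; exit 1' ERR
    # Retry RPCs until the socket is up, then proceed.
    "$SPDK/scripts/rpc.py" -t 30 rpc_get_methods > /dev/null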
00:07:26.234 [2024-07-11 21:14:00.915190] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786781 ] 00:07:26.234 EAL: No free 2048 kB hugepages reported on node 1 00:07:26.234 [2024-07-11 21:14:00.982439] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:26.494 [2024-07-11 21:14:01.077765] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:26.494 21:14:01 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:26.494 21:14:01 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:07:26.494 21:14:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:07:26.494 21:14:01 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:07:26.494 21:14:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:07:26.494 21:14:01 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:07:26.494 21:14:01 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:07:26.494 21:14:01 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:26.494 21:14:01 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:26.494 21:14:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:26.494 ************************************ 00:07:26.494 START TEST accel_assign_opcode 00:07:26.494 ************************************ 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.494 [2024-07-11 21:14:01.134349] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.494 [2024-07-11 21:14:01.142367] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.494 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.752 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.752 21:14:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:07:26.752 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:26.752 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
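The accel_assign_opcode suite traced above is a short RPC conversation: assign the copy opcode to a deliberately bogus module, reassign it to the software module, finish framework init, then read the assignment back. Condensed (rpc_cmd in the harness is a thin wrapper over scripts/rpc.py):

    # Condensed RPC sequence behind accel_assign_opcode.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc="$SPDK/scripts/rpc.py"
    "$rpc" accel_assign_opc -o copy -m incorrect   # pre-init, accepted with a NOTICE
    "$rpc" accel_assign_opc -o copy -m software    # replaces the bogus assignment
    "$rpc" framework_start_init                    # accel framework resolves modules now
    "$rpc" accel_get_opc_assignments | jq -r .copy # expected to print "software"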
00:07:26.752 21:14:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:07:26.752 21:14:01 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:07:26.753 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:26.753 software 00:07:26.753 00:07:26.753 real 0m0.294s 00:07:26.753 user 0m0.040s 00:07:26.753 sys 0m0.006s 00:07:26.753 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:26.753 21:14:01 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:07:26.753 ************************************ 00:07:26.753 END TEST accel_assign_opcode 00:07:26.753 ************************************ 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:07:26.753 21:14:01 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 786781 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 786781 ']' 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 786781 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 786781 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 786781' 00:07:26.753 killing process with pid 786781 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@967 -- # kill 786781 00:07:26.753 21:14:01 accel_rpc -- common/autotest_common.sh@972 -- # wait 786781 00:07:27.319 00:07:27.319 real 0m1.064s 00:07:27.319 user 0m0.967s 00:07:27.319 sys 0m0.449s 00:07:27.319 21:14:01 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:27.319 21:14:01 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:07:27.319 ************************************ 00:07:27.319 END TEST accel_rpc 00:07:27.319 ************************************ 00:07:27.319 21:14:01 -- common/autotest_common.sh@1142 -- # return 0 00:07:27.319 21:14:01 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.319 21:14:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:27.319 21:14:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:27.319 21:14:01 -- common/autotest_common.sh@10 -- # set +x 00:07:27.319 ************************************ 00:07:27.319 START TEST app_cmdline 00:07:27.319 ************************************ 00:07:27.319 21:14:01 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:07:27.319 * Looking for test storage... 
00:07:27.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:27.319 21:14:01 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:07:27.319 21:14:01 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=786987 00:07:27.319 21:14:01 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:07:27.319 21:14:01 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 786987 00:07:27.319 21:14:01 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 786987 ']' 00:07:27.319 21:14:01 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:27.319 21:14:01 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:27.319 21:14:01 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:27.319 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:27.319 21:14:01 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:27.319 21:14:01 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:27.319 [2024-07-11 21:14:02.030708] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:27.319 [2024-07-11 21:14:02.030821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid786987 ] 00:07:27.319 EAL: No free 2048 kB hugepages reported on node 1 00:07:27.579 [2024-07-11 21:14:02.093220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:27.579 [2024-07-11 21:14:02.188300] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:27.838 21:14:02 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:27.838 21:14:02 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:07:27.838 21:14:02 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:07:28.096 { 00:07:28.096 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:07:28.096 "fields": { 00:07:28.096 "major": 24, 00:07:28.096 "minor": 9, 00:07:28.096 "patch": 0, 00:07:28.096 "suffix": "-pre", 00:07:28.096 "commit": "719d03c6a" 00:07:28.096 } 00:07:28.096 } 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@26 -- # sort 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:07:28.096 21:14:02 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:28.096 21:14:02 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:07:28.355 request: 00:07:28.355 { 00:07:28.355 "method": "env_dpdk_get_mem_stats", 00:07:28.355 "req_id": 1 00:07:28.355 } 00:07:28.355 Got JSON-RPC error response 00:07:28.355 response: 00:07:28.355 { 00:07:28.355 "code": -32601, 00:07:28.355 "message": "Method not found" 00:07:28.355 } 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:28.355 21:14:02 app_cmdline -- app/cmdline.sh@1 -- # killprocess 786987 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 786987 ']' 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 786987 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:28.355 21:14:02 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 786987 00:07:28.355 21:14:03 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:28.355 21:14:03 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:07:28.355 21:14:03 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 786987' 00:07:28.355 killing process with pid 786987 00:07:28.355 21:14:03 app_cmdline -- common/autotest_common.sh@967 -- # kill 786987 00:07:28.355 21:14:03 app_cmdline -- common/autotest_common.sh@972 -- # wait 786987 00:07:28.923 00:07:28.923 real 0m1.468s 00:07:28.923 user 0m1.806s 00:07:28.923 sys 0m0.474s 00:07:28.923 21:14:03 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.923 
21:14:03 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:07:28.923 ************************************ 00:07:28.923 END TEST app_cmdline 00:07:28.923 ************************************ 00:07:28.923 21:14:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.923 21:14:03 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:28.923 21:14:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:28.923 21:14:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.923 21:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:28.923 ************************************ 00:07:28.923 START TEST version 00:07:28.923 ************************************ 00:07:28.923 21:14:03 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:07:28.923 * Looking for test storage... 00:07:28.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:28.923 21:14:03 version -- app/version.sh@17 -- # get_header_version major 00:07:28.923 21:14:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.923 21:14:03 version -- app/version.sh@14 -- # cut -f2 00:07:28.923 21:14:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.923 21:14:03 version -- app/version.sh@17 -- # major=24 00:07:28.923 21:14:03 version -- app/version.sh@18 -- # get_header_version minor 00:07:28.923 21:14:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.923 21:14:03 version -- app/version.sh@14 -- # cut -f2 00:07:28.923 21:14:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.923 21:14:03 version -- app/version.sh@18 -- # minor=9 00:07:28.923 21:14:03 version -- app/version.sh@19 -- # get_header_version patch 00:07:28.923 21:14:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.923 21:14:03 version -- app/version.sh@14 -- # cut -f2 00:07:28.923 21:14:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.923 21:14:03 version -- app/version.sh@19 -- # patch=0 00:07:28.923 21:14:03 version -- app/version.sh@20 -- # get_header_version suffix 00:07:28.923 21:14:03 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:07:28.923 21:14:03 version -- app/version.sh@14 -- # cut -f2 00:07:28.923 21:14:03 version -- app/version.sh@14 -- # tr -d '"' 00:07:28.923 21:14:03 version -- app/version.sh@20 -- # suffix=-pre 00:07:28.923 21:14:03 version -- app/version.sh@22 -- # version=24.9 00:07:28.923 21:14:03 version -- app/version.sh@25 -- # (( patch != 0 )) 00:07:28.923 21:14:03 version -- app/version.sh@28 -- # version=24.9rc0 00:07:28.923 21:14:03 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:28.923 21:14:03 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
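version.sh derives each component from include/spdk/version.h with the grep/cut/tr pipeline traced above, then cross-checks the assembled string against the in-tree Python package. The same checks, condensed (the -pre to rc0 mapping mirrors what the trace shows between @25 and @28):

    # Condensed form of the version.sh checks; cut -f2 relies on version.h
    # separating each macro name and value with a tab.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    hdr="$SPDK/include/spdk/version.h"
    ver() { grep -E "^#define SPDK_VERSION_$1[[:space:]]+" "$hdr" | cut -f2 | tr -d '"'; }
    major=$(ver MAJOR) minor=$(ver MINOR) patch=$(ver PATCH) suffix=$(ver SUFFIX)
    version="$major.$minor"
    (( patch != 0 )) && version="$version.$patch"
    [[ $suffix == -pre ]] && version="${version}rc0"   # 24.9 + -pre -> 24.9rc0
    export PYTHONPATH="$SPDK/python:${PYTHONPATH:-}"   # as the trace sets it
    py=$(python3 -c 'import spdk; print(spdk.__version__)')
    [[ $py == "$version" ]] && echo "header and package agree: $version"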
00:07:28.923 21:14:03 version -- app/version.sh@30 -- # py_version=24.9rc0 00:07:28.923 21:14:03 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:07:28.923 00:07:28.923 real 0m0.104s 00:07:28.923 user 0m0.053s 00:07:28.923 sys 0m0.073s 00:07:28.923 21:14:03 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:28.923 21:14:03 version -- common/autotest_common.sh@10 -- # set +x 00:07:28.923 ************************************ 00:07:28.923 END TEST version 00:07:28.923 ************************************ 00:07:28.923 21:14:03 -- common/autotest_common.sh@1142 -- # return 0 00:07:28.923 21:14:03 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:07:28.923 21:14:03 -- spdk/autotest.sh@198 -- # uname -s 00:07:28.923 21:14:03 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:07:28.923 21:14:03 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:28.923 21:14:03 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:07:28.923 21:14:03 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:07:28.923 21:14:03 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:07:28.923 21:14:03 -- spdk/autotest.sh@260 -- # timing_exit lib 00:07:28.923 21:14:03 -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:28.923 21:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:28.923 21:14:03 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:07:28.923 21:14:03 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:07:28.923 21:14:03 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:07:28.923 21:14:03 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:07:28.923 21:14:03 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:07:28.923 21:14:03 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:07:28.923 21:14:03 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:28.923 21:14:03 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:28.923 21:14:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.923 21:14:03 -- common/autotest_common.sh@10 -- # set +x 00:07:28.923 ************************************ 00:07:28.923 START TEST nvmf_tcp 00:07:28.923 ************************************ 00:07:28.923 21:14:03 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:07:28.923 * Looking for test storage... 00:07:28.923 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/nvmf.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:28.923 21:14:03 nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:28.924 21:14:03 nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:28.924 21:14:03 nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:28.924 21:14:03 nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:28.924 21:14:03 nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.924 21:14:03 nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.924 21:14:03 nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.924 21:14:03 nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:07:28.924 21:14:03 nvmf_tcp -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/nvmf.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/nvmf.sh@18 -- # TEST_ARGS=("$@") 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/nvmf.sh@20 -- # timing_enter target 00:07:28.924 21:14:03 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:28.924 21:14:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/nvmf.sh@22 -- # [[ 0 -eq 0 ]] 00:07:28.924 21:14:03 nvmf_tcp -- nvmf/nvmf.sh@23 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:28.924 21:14:03 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:28.924 21:14:03 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:28.924 21:14:03 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:29.182 ************************************ 00:07:29.182 START TEST nvmf_example 00:07:29.182 ************************************ 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:07:29.182 * Looking for test storage... 
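The START TEST / END TEST banners and the real/user/sys timings in this trace all come from the harness's run_test wrapper. A simplified sketch of that pattern, inferred from the banner output above and not the actual common/autotest_common.sh implementation (which also manages xtrace state and per-test timing):
# run_test pattern as suggested by the banners in this log (sketch only)
run_test() {
	local name=$1
	shift
	echo "************************************"
	echo "START TEST $name"
	echo "************************************"
	time "$@"        # bash's time keyword prints the real/user/sys lines seen above
	local rc=$?
	echo "************************************"
	echo "END TEST $name"
	echo "************************************"
	return $rc
}
# Usage, mirroring the invocation in the trace:
#   run_test nvmf_example test/nvmf/target/nvmf_example.sh --transport=tcp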
00:07:29.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- paths/export.sh@5 -- # export PATH 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:29.182 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- 
common/autotest_common.sh@722 -- # xtrace_disable 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:07:29.183 21:14:03 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:31.088 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:31.088 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:31.088 Found net devices under 
0000:0a:00.0: cvl_0_0 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:31.088 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 
-p tcp --dport 4420 -j ACCEPT 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:31.088 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:31.088 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:07:31.088 00:07:31.088 --- 10.0.0.2 ping statistics --- 00:07:31.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.088 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:31.088 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:31.088 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.099 ms 00:07:31.088 00:07:31.088 --- 10.0.0.1 ping statistics --- 00:07:31.088 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:31.088 rtt min/avg/max/mdev = 0.099/0.099/0.099/0.000 ms 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:07:31.088 21:14:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=789000 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 789000 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 789000 ']' 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:31.089 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
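To make the NVMe/TCP traffic cross a real link, nvmf_tcp_init above splits the two ports of the Intel E810 NIC found by the PCI scan (0x8086:0x159b, net devices cvl_0_0 and cvl_0_1) between a private target namespace and the default initiator namespace. A condensed sketch of the setup, using the commands exactly as they appear in the trace (run as root):
# target/initiator split performed by nvmf_tcp_init (from the trace above)
ip netns add cvl_0_0_ns_spdk                          # private namespace for the target
ip link set cvl_0_0 netns cvl_0_0_ns_spdk             # move one NIC port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                   # initiator side, default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # open the NVMe/TCP port
ping -c 1 10.0.0.2                                    # initiator -> target reachability check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1      # target -> initiator check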
00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:31.089 21:14:05 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:31.348 EAL: No free 2048 kB hugepages reported on node 1 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:07:32.285 21:14:06 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:07:32.285 EAL: No free 2048 kB hugepages reported on node 1 
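The rpc_cmd calls above provision the running nvmf example target over the /var/tmp/spdk.sock RPC socket. Outside the harness, the same sequence could be issued with SPDK's scripts/rpc.py; a sketch assuming the default socket path, with flags copied from the trace:
# standalone equivalent of the rpc_cmd sequence above (sketch)
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192           # TCP transport, options as captured in the trace
scripts/rpc.py bdev_malloc_create 64 512                         # 64 MiB RAM bdev, 512 B blocks; the trace names it Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# benchmark then runs from the initiator side, exactly as in the trace:
# queue depth 64, 4 KiB random I/O, 30% read mix, 10 s run
build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
	-r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'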
00:07:44.496 Initializing NVMe Controllers 00:07:44.496 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:44.496 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:44.496 Initialization complete. Launching workers. 00:07:44.496 ======================================================== 00:07:44.496 Latency(us) 00:07:44.496 Device Information : IOPS MiB/s Average min max 00:07:44.496 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 14459.32 56.48 4425.63 877.36 15393.37 00:07:44.496 ======================================================== 00:07:44.496 Total : 14459.32 56.48 4425.63 877.36 15393.37 00:07:44.496 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@117 -- # sync 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:44.496 rmmod nvme_tcp 00:07:44.496 rmmod nvme_fabrics 00:07:44.496 rmmod nvme_keyring 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 789000 ']' 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@490 -- # killprocess 789000 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 789000 ']' 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 789000 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 789000 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 789000' 00:07:44.496 killing process with pid 789000 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@967 -- # kill 789000 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@972 -- # wait 789000 00:07:44.496 nvmf threads initialize successfully 00:07:44.496 bdev subsystem init successfully 00:07:44.496 created a nvmf target service 00:07:44.496 create targets's poll groups done 00:07:44.496 all subsystems of target started 00:07:44.496 nvmf target is running 00:07:44.496 all subsystems of target stopped 00:07:44.496 destroy targets's poll groups done 00:07:44.496 destroyed the nvmf target service 00:07:44.496 bdev subsystem finish successfully 00:07:44.496 nvmf threads destroy successfully 00:07:44.496 21:14:17 
nvmf_tcp.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:44.496 21:14:17 nvmf_tcp.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.497 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:44.497 21:14:17 nvmf_tcp.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.755 21:14:19 nvmf_tcp.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:44.755 21:14:19 nvmf_tcp.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:07:44.755 21:14:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:44.755 21:14:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:44.755 00:07:44.755 real 0m15.786s 00:07:44.755 user 0m45.086s 00:07:44.755 sys 0m3.148s 00:07:44.755 21:14:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.755 21:14:19 nvmf_tcp.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:07:44.755 ************************************ 00:07:44.755 END TEST nvmf_example 00:07:44.755 ************************************ 00:07:44.755 21:14:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:07:44.755 21:14:19 nvmf_tcp -- nvmf/nvmf.sh@24 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:44.755 21:14:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:44.755 21:14:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.755 21:14:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:07:45.016 ************************************ 00:07:45.016 START TEST nvmf_filesystem 00:07:45.016 ************************************ 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:07:45.017 * Looking for test storage... 
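Before nvmf_filesystem proceeds, note the teardown pattern that closed nvmf_example above: nvmftestfini unloads the kernel initiator modules, kills the target process, and undoes the address and namespace plumbing. A condensed sketch of those steps as the trace shows them (pid 789000 is the example target started earlier):
# teardown sequence from nvmftestfini (condensed from the trace above)
modprobe -v -r nvme-tcp            # retried in a loop; rmmod lines show nvme_tcp/nvme_fabrics/nvme_keyring dropping
modprobe -v -r nvme-fabrics
kill 789000 && wait 789000         # killprocess: stop the target and reap it
ip -4 addr flush cvl_0_1           # remove the initiator-side address
# _remove_spdk_ns then tears down cvl_0_0_ns_spdk; the exact command is not
# shown in the trace (assumed to be an "ip netns delete"-style cleanup)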
00:07:45.017 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:07:45.017 21:14:19 
nvmf_tcp.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- 
common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@10 -- # 
_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:07:45.017 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:07:45.017 #define SPDK_CONFIG_H 00:07:45.017 #define SPDK_CONFIG_APPS 1 00:07:45.017 #define SPDK_CONFIG_ARCH native 00:07:45.017 #undef SPDK_CONFIG_ASAN 00:07:45.017 #undef SPDK_CONFIG_AVAHI 00:07:45.017 #undef SPDK_CONFIG_CET 00:07:45.017 #define SPDK_CONFIG_COVERAGE 1 00:07:45.017 #define SPDK_CONFIG_CROSS_PREFIX 00:07:45.017 #undef SPDK_CONFIG_CRYPTO 00:07:45.017 #undef SPDK_CONFIG_CRYPTO_MLX5 00:07:45.017 #undef SPDK_CONFIG_CUSTOMOCF 00:07:45.017 #undef SPDK_CONFIG_DAOS 00:07:45.017 #define SPDK_CONFIG_DAOS_DIR 00:07:45.017 #define SPDK_CONFIG_DEBUG 1 00:07:45.017 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:07:45.017 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:45.017 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:07:45.017 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:45.017 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:07:45.017 #undef SPDK_CONFIG_DPDK_UADK 00:07:45.018 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:07:45.018 #define SPDK_CONFIG_EXAMPLES 1 00:07:45.018 #undef SPDK_CONFIG_FC 00:07:45.018 #define SPDK_CONFIG_FC_PATH 00:07:45.018 #define SPDK_CONFIG_FIO_PLUGIN 1 00:07:45.018 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:07:45.018 #undef SPDK_CONFIG_FUSE 00:07:45.018 #undef SPDK_CONFIG_FUZZER 00:07:45.018 #define SPDK_CONFIG_FUZZER_LIB 00:07:45.018 #undef SPDK_CONFIG_GOLANG 00:07:45.018 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:07:45.018 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:07:45.018 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:07:45.018 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:07:45.018 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:07:45.018 #undef SPDK_CONFIG_HAVE_LIBBSD 00:07:45.018 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:07:45.018 #define SPDK_CONFIG_IDXD 1 00:07:45.018 #define SPDK_CONFIG_IDXD_KERNEL 1 00:07:45.018 #undef SPDK_CONFIG_IPSEC_MB 00:07:45.018 #define SPDK_CONFIG_IPSEC_MB_DIR 00:07:45.018 #define SPDK_CONFIG_ISAL 1 00:07:45.018 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:07:45.018 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:07:45.018 #define 
SPDK_CONFIG_LIBDIR 00:07:45.018 #undef SPDK_CONFIG_LTO 00:07:45.018 #define SPDK_CONFIG_MAX_LCORES 128 00:07:45.018 #define SPDK_CONFIG_NVME_CUSE 1 00:07:45.018 #undef SPDK_CONFIG_OCF 00:07:45.018 #define SPDK_CONFIG_OCF_PATH 00:07:45.018 #define SPDK_CONFIG_OPENSSL_PATH 00:07:45.018 #undef SPDK_CONFIG_PGO_CAPTURE 00:07:45.018 #define SPDK_CONFIG_PGO_DIR 00:07:45.018 #undef SPDK_CONFIG_PGO_USE 00:07:45.018 #define SPDK_CONFIG_PREFIX /usr/local 00:07:45.018 #undef SPDK_CONFIG_RAID5F 00:07:45.018 #undef SPDK_CONFIG_RBD 00:07:45.018 #define SPDK_CONFIG_RDMA 1 00:07:45.018 #define SPDK_CONFIG_RDMA_PROV verbs 00:07:45.018 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:07:45.018 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:07:45.018 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:07:45.018 #define SPDK_CONFIG_SHARED 1 00:07:45.018 #undef SPDK_CONFIG_SMA 00:07:45.018 #define SPDK_CONFIG_TESTS 1 00:07:45.018 #undef SPDK_CONFIG_TSAN 00:07:45.018 #define SPDK_CONFIG_UBLK 1 00:07:45.018 #define SPDK_CONFIG_UBSAN 1 00:07:45.018 #undef SPDK_CONFIG_UNIT_TESTS 00:07:45.018 #undef SPDK_CONFIG_URING 00:07:45.018 #define SPDK_CONFIG_URING_PATH 00:07:45.018 #undef SPDK_CONFIG_URING_ZNS 00:07:45.018 #undef SPDK_CONFIG_USDT 00:07:45.018 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:07:45.018 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:07:45.018 #define SPDK_CONFIG_VFIO_USER 1 00:07:45.018 #define SPDK_CONFIG_VFIO_USER_DIR 00:07:45.018 #define SPDK_CONFIG_VHOST 1 00:07:45.018 #define SPDK_CONFIG_VIRTIO 1 00:07:45.018 #undef SPDK_CONFIG_VTUNE 00:07:45.018 #define SPDK_CONFIG_VTUNE_DIR 00:07:45.018 #define SPDK_CONFIG_WERROR 1 00:07:45.018 #define SPDK_CONFIG_WPDK_DIR 00:07:45.018 #undef SPDK_CONFIG_XNVME 00:07:45.018 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # uname -s 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 
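pm/common, sourced here, selects which power/resource monitors to run and whether each needs privilege escalation; the associative array populated in the trace gates that choice. A small sketch of the pattern (arrays as the trace shows them; the launch loop is illustrative only, not pm/common's actual code):
# monitor selection pattern from pm/common (sketch)
declare -A MONITOR_RESOURCES_SUDO=(
	[collect-bmc-pm]=1       # BMC power readings need root
	[collect-cpu-load]=0
	[collect-cpu-temp]=0
	[collect-vmstat]=0
)
SUDO[0]=""             # index 0: run as the current user
SUDO[1]="sudo -E"      # index 1: escalate, preserving the environment
MONITOR_RESOURCES=(collect-cpu-load collect-vmstat)
MONITOR_RESOURCES+=(collect-cpu-temp)   # added on bare-metal Linux, as in this run
MONITOR_RESOURCES+=(collect-bmc-pm)
for mon in "${MONITOR_RESOURCES[@]}"; do
	# hypothetical launch; $_pmdir is the scripts/perf/pm directory resolved above
	${SUDO[${MONITOR_RESOURCES_SUDO[$mon]}]} "$_pmdir/$mon" &
done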
00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:07:45.018 
21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:07:45.018 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@117 -- # export 
SPDK_TEST_LVOL 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@124 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@138 -- # : v23.11 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:07:45.019 
21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # export 
PYTHONDONTWRITEBYTECODE=1 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@242 -- # '[' -z /var/spdk/dependencies ']' 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.019 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 
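
The exports above pin the whole test matrix down as shell flags (1 = run, 0 = skip) plus tool paths. A minimal sketch of how such SPDK_TEST_* gates are typically consumed follows; run_nvmf_suite is a hypothetical stand-in, not a real harness function:

# Sketch only: flag semantics taken from the exports traced above.
if [[ "${SPDK_TEST_NVMF:-0}" -eq 1 ]]; then
  # SPDK_TEST_NVMF_TRANSPORT was exported as "tcp" for this run
  TEST_TRANSPORT="${SPDK_TEST_NVMF_TRANSPORT:-tcp}" run_nvmf_suite
fi
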
00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j48 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 790712 ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 790712 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.HNYN8R 00:07:45.020 
21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.HNYN8R/tests/target /tmp/spdk.HNYN8R 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=953643008 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4330786816 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=53013192704 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=61994725376 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8981532672 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30941724672 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997360640 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 
-- # uses["$mount"]=55635968 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=12390187008 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=12398948352 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=8761344 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=30996717568 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=30997364736 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=647168 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=6199468032 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=6199472128 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:07:45.020 * Looking for test storage... 
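
The df -T table above is what set_test_storage works from; the lines that follow pick the root overlay mount. A worked sketch of that check with this run's numbers:

# Values copied from the df output above.
requested_size=2214592512      # 2 GiB + 64 MiB slack: 2147483648 + 67108864
target_space=53013192704       # avail on / (spdk_root, overlay)
(( target_space >= requested_size )) && echo "use /"   # passes, so / is chosen
# new_size = used + requested: 8981532672 + 2214592512 = 11196125184,
# and 11196125184 * 100 / 61994725376 is about 18%, well under the 95% cleanup threshold.
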
00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=53013192704 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=11196125184 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.020 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 14 ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/14 ]] 00:07:45.020 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 
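
nvmf/common.sh has now fixed the connect identity for the run. Restated as a sketch (the NQN and ID values are the ones `nvme gen-hostnqn` produced above; deriving the hostid from the NQN suffix is an assumption for illustration):

NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOSTID=${NVME_HOSTNQN##*uuid:}    # 5b23e107-7094-e311-b1cb-001e67a97d55
NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
# passed through verbatim later as: nvme connect ... "${NVME_HOST[@]}"
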
00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:45.021 21:14:19 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 
00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:46.974 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:46.974 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:46.974 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@388 
-- # [[ tcp == tcp ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:46.974 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:46.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:46.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:07:46.974 00:07:46.974 --- 10.0.0.2 ping statistics --- 00:07:46.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.974 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:46.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:46.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:07:46.974 00:07:46.974 --- 10.0.0.1 ping statistics --- 00:07:46.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:46.974 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:46.974 21:14:21 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:07:47.235 ************************************ 00:07:47.235 START TEST nvmf_filesystem_no_in_capsule 00:07:47.235 ************************************ 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=792335 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 792335 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 
792335 ']' 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:47.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:47.235 21:14:21 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.235 [2024-07-11 21:14:21.821868] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:07:47.235 [2024-07-11 21:14:21.821949] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:47.235 EAL: No free 2048 kB hugepages reported on node 1 00:07:47.235 [2024-07-11 21:14:21.885970] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:47.235 [2024-07-11 21:14:21.979438] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:47.235 [2024-07-11 21:14:21.979498] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:47.235 [2024-07-11 21:14:21.979511] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:47.235 [2024-07-11 21:14:21.979522] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:47.235 [2024-07-11 21:14:21.979531] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
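
By this point the trace has split the two E810 ports into a namespace-based back-to-back topology and started nvmf_tgt inside it. A condensed sketch of that setup plus the waitforlisten poll (simplified; the real helpers live in nvmf/common.sh and autotest_common.sh):

ip netns add cvl_0_0_ns_spdk                    # target NIC gets its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator side, root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ip netns exec cvl_0_0_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!                                      # 792335 in this run
until [ -S /var/tmp/spdk.sock ]; do kill -0 "$nvmfpid" || exit 1; sleep 0.5; done
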
00:07:47.235 [2024-07-11 21:14:21.979616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:47.235 [2024-07-11 21:14:21.979680] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:47.235 [2024-07-11 21:14:21.979746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:47.235 [2024-07-11 21:14:21.979747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.495 [2024-07-11 21:14:22.129576] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.495 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.756 Malloc1 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@10 -- # set +x 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.756 [2024-07-11 21:14:22.313256] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:07:47.756 { 00:07:47.756 "name": "Malloc1", 00:07:47.756 "aliases": [ 00:07:47.756 "f2b1cb32-c5e9-479d-920a-eb66fedf57a8" 00:07:47.756 ], 00:07:47.756 "product_name": "Malloc disk", 00:07:47.756 "block_size": 512, 00:07:47.756 "num_blocks": 1048576, 00:07:47.756 "uuid": "f2b1cb32-c5e9-479d-920a-eb66fedf57a8", 00:07:47.756 "assigned_rate_limits": { 00:07:47.756 "rw_ios_per_sec": 0, 00:07:47.756 "rw_mbytes_per_sec": 0, 00:07:47.756 "r_mbytes_per_sec": 0, 00:07:47.756 "w_mbytes_per_sec": 0 00:07:47.756 }, 00:07:47.756 "claimed": true, 00:07:47.756 "claim_type": "exclusive_write", 00:07:47.756 "zoned": false, 00:07:47.756 "supported_io_types": { 00:07:47.756 "read": true, 00:07:47.756 "write": true, 00:07:47.756 "unmap": true, 00:07:47.756 "flush": true, 00:07:47.756 "reset": true, 00:07:47.756 "nvme_admin": false, 00:07:47.756 "nvme_io": false, 00:07:47.756 "nvme_io_md": false, 00:07:47.756 "write_zeroes": true, 00:07:47.756 "zcopy": true, 00:07:47.756 "get_zone_info": false, 00:07:47.756 "zone_management": false, 00:07:47.756 "zone_append": false, 00:07:47.756 "compare": false, 00:07:47.756 "compare_and_write": false, 00:07:47.756 "abort": true, 00:07:47.756 "seek_hole": false, 00:07:47.756 "seek_data": false, 00:07:47.756 "copy": true, 00:07:47.756 "nvme_iov_md": false 00:07:47.756 }, 00:07:47.756 "memory_domains": [ 00:07:47.756 { 
00:07:47.756 "dma_device_id": "system", 00:07:47.756 "dma_device_type": 1 00:07:47.756 }, 00:07:47.756 { 00:07:47.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:47.756 "dma_device_type": 2 00:07:47.756 } 00:07:47.756 ], 00:07:47.756 "driver_specific": {} 00:07:47.756 } 00:07:47.756 ]' 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:07:47.756 21:14:22 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:48.324 21:14:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:07:48.324 21:14:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:07:48.324 21:14:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:07:48.324 21:14:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:07:48.324 21:14:23 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # 
sec_size_to_bytes nvme0n1 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:07:50.858 21:14:25 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:07:51.794 21:14:26 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:52.734 ************************************ 00:07:52.734 START TEST filesystem_ext4 00:07:52.734 ************************************ 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:07:52.734 21:14:27 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:07:52.734 21:14:27 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:07:52.734 mke2fs 1.46.5 (30-Dec-2021) 00:07:52.735 Discarding device blocks: 0/522240 done 00:07:52.735 Creating filesystem with 522240 1k blocks and 130560 inodes 00:07:52.735 Filesystem UUID: f24bbf39-3ff8-4d30-bc4d-b21669bcae49 00:07:52.735 Superblock backups stored on blocks: 00:07:52.735 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:07:52.735 00:07:52.735 Allocating group tables: 0/64 done 00:07:52.735 Writing inode tables: 0/64 done 00:07:56.025 Creating journal (8192 blocks): done 00:07:56.544 Writing superblocks and filesystem accounting information: 0/64 26/64 done 00:07:56.544 00:07:56.544 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:07:56.544 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 792335 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:56.803 00:07:56.803 real 0m4.155s 00:07:56.803 user 0m0.024s 00:07:56.803 sys 0m0.053s 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:07:56.803 ************************************ 00:07:56.803 END TEST filesystem_ext4 00:07:56.803 ************************************ 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:56.803 21:14:31 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:56.803 ************************************ 00:07:56.803 START TEST filesystem_btrfs 00:07:56.803 ************************************ 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:07:56.803 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:07:57.371 btrfs-progs v6.6.2 00:07:57.371 See https://btrfs.readthedocs.io for more information. 00:07:57.371 00:07:57.371 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:07:57.371 NOTE: several default settings have changed in version 5.15, please make sure 00:07:57.371 this does not affect your deployments: 00:07:57.371 - DUP for metadata (-m dup) 00:07:57.371 - enabled no-holes (-O no-holes) 00:07:57.371 - enabled free-space-tree (-R free-space-tree) 00:07:57.371 00:07:57.371 Label: (null) 00:07:57.371 UUID: 7ce62af5-2f7e-496d-9b9f-50f87daa2caa 00:07:57.371 Node size: 16384 00:07:57.371 Sector size: 4096 00:07:57.371 Filesystem size: 510.00MiB 00:07:57.372 Block group profiles: 00:07:57.372 Data: single 8.00MiB 00:07:57.372 Metadata: DUP 32.00MiB 00:07:57.372 System: DUP 8.00MiB 00:07:57.372 SSD detected: yes 00:07:57.372 Zoned device: no 00:07:57.372 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:07:57.372 Runtime features: free-space-tree 00:07:57.372 Checksum: crc32c 00:07:57.372 Number of devices: 1 00:07:57.372 Devices: 00:07:57.372 ID SIZE PATH 00:07:57.372 1 510.00MiB /dev/nvme0n1p1 00:07:57.372 00:07:57.372 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:07:57.372 21:14:31 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:07:58.307 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:07:58.307 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:07:58.307 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:07:58.307 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:07:58.307 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:07:58.307 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:07:58.307 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 792335 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:07:58.308 00:07:58.308 real 0m1.371s 00:07:58.308 user 0m0.014s 00:07:58.308 sys 0m0.116s 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:07:58.308 ************************************ 00:07:58.308 END TEST filesystem_btrfs 00:07:58.308 ************************************ 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:07:58.308 ************************************ 00:07:58.308 START TEST filesystem_xfs 00:07:58.308 ************************************ 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:07:58.308 21:14:32 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:07:58.308 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:07:58.308 = sectsz=512 attr=2, projid32bit=1 00:07:58.308 = crc=1 finobt=1, sparse=1, rmapbt=0 00:07:58.308 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:07:58.308 data = bsize=4096 blocks=130560, imaxpct=25 00:07:58.308 = sunit=0 swidth=0 blks 00:07:58.308 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:07:58.308 log =internal log bsize=4096 blocks=16384, version=2 00:07:58.308 = sectsz=512 sunit=0 blks, lazy-count=1 00:07:58.308 realtime =none extsz=4096 blocks=0, rtextents=0 00:07:59.685 Discarding blocks...Done. 
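All three mkfs invocations in this run (mkfs.ext4 -F, mkfs.btrfs -f, and the mkfs.xfs -f above) go through the make_filesystem helper traced at common/autotest_common.sh@924-935, whose visible job is picking the force flag the chosen mkfs understands before calling it. A minimal bash sketch reconstructed from the xtrace; the retry bookkeeping (local i at @926) is elided, so treat the exact body as an assumption rather than the verbatim helper:

    # reconstructed from the xtrace above, not the verbatim helper
    make_filesystem() {
        local fstype=$1 dev_name=$2 force
        if [ "$fstype" = ext4 ]; then
            force=-F        # mkfs.ext4 forces with -F (@929-930)
        else
            force=-f        # mkfs.btrfs and mkfs.xfs force with -f (@932)
        fi
        mkfs."$fstype" $force "$dev_name"
    }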
00:07:59.685 21:14:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:07:59.685 21:14:34 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 792335 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:02.220 00:08:02.220 real 0m3.596s 00:08:02.220 user 0m0.020s 00:08:02.220 sys 0m0.057s 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:02.220 ************************************ 00:08:02.220 END TEST filesystem_xfs 00:08:02.220 ************************************ 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:02.220 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:02.221 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.221 21:14:36 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 792335 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 792335 ']' 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 792335 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 792335 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 792335' 00:08:02.221 killing process with pid 792335 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 792335 00:08:02.221 21:14:36 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 792335 00:08:02.479 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:02.479 00:08:02.479 real 0m15.455s 00:08:02.479 user 0m59.590s 00:08:02.479 sys 0m2.053s 00:08:02.479 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:02.479 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.479 ************************************ 00:08:02.479 END TEST nvmf_filesystem_no_in_capsule 00:08:02.479 ************************************ 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 
']' 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:02.739 ************************************ 00:08:02.739 START TEST nvmf_filesystem_in_capsule 00:08:02.739 ************************************ 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=794314 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 794314 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 794314 ']' 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:02.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:02.739 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.739 [2024-07-11 21:14:37.333949] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:02.739 [2024-07-11 21:14:37.334026] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:02.739 EAL: No free 2048 kB hugepages reported on node 1 00:08:02.739 [2024-07-11 21:14:37.401073] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:02.739 [2024-07-11 21:14:37.486450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:02.739 [2024-07-11 21:14:37.486517] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
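The functional difference between this nvmf_filesystem_in_capsule run and the nvmf_filesystem_no_in_capsule run above is the in-capsule data size handed to nvmf_create_transport. Condensed from the two traces; invoking it by hand through scripts/rpc.py (instead of the rpc_cmd wrapper the test uses) is an assumption:

    # no_in_capsule run earlier in the log: in-capsule data disabled
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    # this run: commands may carry up to 4096 bytes of data in the capsule
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 4096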
00:08:02.739 [2024-07-11 21:14:37.486548] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:02.739 [2024-07-11 21:14:37.486559] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:02.739 [2024-07-11 21:14:37.486568] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:02.739 [2024-07-11 21:14:37.486649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:02.739 [2024-07-11 21:14:37.486674] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:02.739 [2024-07-11 21:14:37.486729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:02.739 [2024-07-11 21:14:37.486731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:02.999 [2024-07-11 21:14:37.646579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:02.999 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.259 Malloc1 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.259 21:14:37 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.259 [2024-07-11 21:14:37.837205] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:08:03.259 { 00:08:03.259 "name": "Malloc1", 00:08:03.259 "aliases": [ 00:08:03.259 "33e559af-f77a-4796-b949-544fd840ff1d" 00:08:03.259 ], 00:08:03.259 "product_name": "Malloc disk", 00:08:03.259 "block_size": 512, 00:08:03.259 "num_blocks": 1048576, 00:08:03.259 "uuid": "33e559af-f77a-4796-b949-544fd840ff1d", 00:08:03.259 "assigned_rate_limits": { 00:08:03.259 "rw_ios_per_sec": 0, 00:08:03.259 "rw_mbytes_per_sec": 0, 00:08:03.259 "r_mbytes_per_sec": 0, 00:08:03.259 "w_mbytes_per_sec": 0 00:08:03.259 }, 00:08:03.259 "claimed": true, 00:08:03.259 "claim_type": "exclusive_write", 00:08:03.259 "zoned": false, 00:08:03.259 "supported_io_types": { 00:08:03.259 "read": true, 00:08:03.259 "write": true, 00:08:03.259 "unmap": true, 00:08:03.259 "flush": true, 00:08:03.259 "reset": true, 00:08:03.259 "nvme_admin": false, 00:08:03.259 "nvme_io": false, 00:08:03.259 "nvme_io_md": false, 00:08:03.259 "write_zeroes": true, 00:08:03.259 "zcopy": true, 00:08:03.259 "get_zone_info": false, 00:08:03.259 "zone_management": false, 00:08:03.259 
"zone_append": false, 00:08:03.259 "compare": false, 00:08:03.259 "compare_and_write": false, 00:08:03.259 "abort": true, 00:08:03.259 "seek_hole": false, 00:08:03.259 "seek_data": false, 00:08:03.259 "copy": true, 00:08:03.259 "nvme_iov_md": false 00:08:03.259 }, 00:08:03.259 "memory_domains": [ 00:08:03.259 { 00:08:03.259 "dma_device_id": "system", 00:08:03.259 "dma_device_type": 1 00:08:03.259 }, 00:08:03.259 { 00:08:03.259 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:03.259 "dma_device_type": 2 00:08:03.259 } 00:08:03.259 ], 00:08:03.259 "driver_specific": {} 00:08:03.259 } 00:08:03.259 ]' 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:08:03.259 21:14:37 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:03.828 21:14:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:08:03.828 21:14:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:08:03.828 21:14:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:03.828 21:14:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:03.828 21:14:38 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:08:06.374 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:06.374 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 
00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:08:06.375 21:14:40 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:08:07.344 21:14:41 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:08.284 ************************************ 00:08:08.284 START TEST filesystem_in_capsule_ext4 00:08:08.284 ************************************ 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:08:08.284 21:14:42 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:08:08.284 21:14:42 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:08:08.284 mke2fs 1.46.5 (30-Dec-2021) 00:08:08.284 Discarding device blocks: 0/522240 done 00:08:08.284 Creating filesystem with 522240 1k blocks and 130560 inodes 00:08:08.284 Filesystem UUID: 4fb3c410-76d9-4e96-bded-0277d9491a4b 00:08:08.284 Superblock backups stored on blocks: 00:08:08.284 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:08:08.284 00:08:08.284 Allocating group tables: 0/64 done 00:08:08.284 Writing inode tables: 0/64 done 00:08:08.542 Creating journal (8192 blocks): done 00:08:08.542 Writing superblocks and filesystem accounting information: 0/64 done 00:08:08.542 00:08:08.542 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:08:08.542 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 794314 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:09.117 00:08:09.117 real 0m0.898s 00:08:09.117 user 0m0.019s 00:08:09.117 sys 0m0.058s 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:08:09.117 ************************************ 00:08:09.117 END TEST filesystem_in_capsule_ext4 00:08:09.117 ************************************ 00:08:09.117 
21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:09.117 ************************************ 00:08:09.117 START TEST filesystem_in_capsule_btrfs 00:08:09.117 ************************************ 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:08:09.117 21:14:43 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:08:09.686 btrfs-progs v6.6.2 00:08:09.686 See https://btrfs.readthedocs.io for more information. 00:08:09.686 00:08:09.686 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 
00:08:09.686 NOTE: several default settings have changed in version 5.15, please make sure 00:08:09.686 this does not affect your deployments: 00:08:09.686 - DUP for metadata (-m dup) 00:08:09.686 - enabled no-holes (-O no-holes) 00:08:09.686 - enabled free-space-tree (-R free-space-tree) 00:08:09.686 00:08:09.686 Label: (null) 00:08:09.686 UUID: 60315d4a-30d8-4f04-a662-3df67496bd72 00:08:09.686 Node size: 16384 00:08:09.686 Sector size: 4096 00:08:09.686 Filesystem size: 510.00MiB 00:08:09.686 Block group profiles: 00:08:09.686 Data: single 8.00MiB 00:08:09.686 Metadata: DUP 32.00MiB 00:08:09.686 System: DUP 8.00MiB 00:08:09.686 SSD detected: yes 00:08:09.686 Zoned device: no 00:08:09.686 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:08:09.686 Runtime features: free-space-tree 00:08:09.686 Checksum: crc32c 00:08:09.686 Number of devices: 1 00:08:09.686 Devices: 00:08:09.686 ID SIZE PATH 00:08:09.686 1 510.00MiB /dev/nvme0n1p1 00:08:09.686 00:08:09.686 21:14:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:08:09.686 21:14:44 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 794314 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:10.624 00:08:10.624 real 0m1.272s 00:08:10.624 user 0m0.018s 00:08:10.624 sys 0m0.115s 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:08:10.624 ************************************ 00:08:10.624 END TEST filesystem_in_capsule_btrfs 00:08:10.624 ************************************ 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule 
-- common/autotest_common.sh@1142 -- # return 0 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:10.624 ************************************ 00:08:10.624 START TEST filesystem_in_capsule_xfs 00:08:10.624 ************************************ 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:08:10.624 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:08:10.625 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:08:10.625 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:08:10.625 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:08:10.625 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:08:10.625 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:08:10.625 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:08:10.625 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:08:10.625 21:14:45 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:08:10.625 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:08:10.625 = sectsz=512 attr=2, projid32bit=1 00:08:10.625 = crc=1 finobt=1, sparse=1, rmapbt=0 00:08:10.625 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:08:10.625 data = bsize=4096 blocks=130560, imaxpct=25 00:08:10.625 = sunit=0 swidth=0 blks 00:08:10.625 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:08:10.625 log =internal log bsize=4096 blocks=16384, version=2 00:08:10.625 = sectsz=512 sunit=0 blks, lazy-count=1 00:08:10.625 realtime =none extsz=4096 blocks=0, rtextents=0 00:08:11.558 Discarding blocks...Done. 
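After every mkfs the test exercises the new filesystem with the same short sequence (target/filesystem.sh@23-30 in the traces, repeated next for this xfs run): mount it, create a file, sync, remove it, sync again, unmount, then confirm the target app is still alive and the partition is still listed. Condensed from the trace, with the paths and checks this run uses:

    mount /dev/nvme0n1p1 /mnt/device
    touch /mnt/device/aaa
    sync
    rm /mnt/device/aaa
    sync
    umount /mnt/device
    kill -0 "$nvmfpid"                        # target must still be running (@37)
    lsblk -l -o NAME | grep -q -w nvme0n1p1   # partition must still be visible (@43)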
00:08:11.558 21:14:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:08:11.558 21:14:46 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 794314 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:08:13.461 00:08:13.461 real 0m2.599s 00:08:13.461 user 0m0.017s 00:08:13.461 sys 0m0.059s 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:08:13.461 ************************************ 00:08:13.461 END TEST filesystem_in_capsule_xfs 00:08:13.461 ************************************ 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:08:13.461 21:14:47 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:13.461 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:13.461 21:14:48 
nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 794314 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 794314 ']' 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 794314 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 794314 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 794314' 00:08:13.461 killing process with pid 794314 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 794314 00:08:13.461 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 794314 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:08:14.028 00:08:14.028 real 0m11.363s 00:08:14.028 user 0m43.549s 00:08:14.028 sys 0m1.799s 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:08:14.028 ************************************ 00:08:14.028 END TEST nvmf_filesystem_in_capsule 00:08:14.028 ************************************ 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@488 -- 
# nvmfcleanup 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:14.028 rmmod nvme_tcp 00:08:14.028 rmmod nvme_fabrics 00:08:14.028 rmmod nvme_keyring 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:14.028 21:14:48 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.564 21:14:50 nvmf_tcp.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:16.564 00:08:16.564 real 0m31.233s 00:08:16.564 user 1m44.005s 00:08:16.564 sys 0m5.391s 00:08:16.564 21:14:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.564 21:14:50 nvmf_tcp.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:08:16.564 ************************************ 00:08:16.564 END TEST nvmf_filesystem 00:08:16.564 ************************************ 00:08:16.564 21:14:50 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:16.564 21:14:50 nvmf_tcp -- nvmf/nvmf.sh@25 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:16.564 21:14:50 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:16.564 21:14:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.564 21:14:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:16.564 ************************************ 00:08:16.564 START TEST nvmf_target_discovery 00:08:16.564 ************************************ 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:08:16.564 * Looking for test storage... 
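With nvmf_filesystem finished, the teardown it just performed condenses to the few commands below; the nvmf_target_discovery test now starting rebuilds all of this state from scratch. The netns deletion line is an assumption about what _remove_spdk_ns does, since its body runs with xtrace suppressed.

    sync
    modprobe -v -r nvme-tcp       # also unloads nvme_fabrics and nvme_keyring, per the rmmod lines above
    modprobe -v -r nvme-fabrics
    ip netns delete cvl_0_0_ns_spdk 2>/dev/null   # assumed effect of _remove_spdk_ns
    ip -4 addr flush cvl_0_1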
00:08:16.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:08:16.564 21:14:50 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.467 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.467 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:08:18.467 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:18.467 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:18.467 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.468 21:14:52 
nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:18.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:18.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # 
echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:18.468 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:18.468 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:18.468 21:14:52 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@258 -- # 
ip link set cvl_0_1 up 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:18.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.115 ms 00:08:18.468 00:08:18.468 --- 10.0.0.2 ping statistics --- 00:08:18.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.468 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:18.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:18.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.085 ms 00:08:18.468 00:08:18.468 --- 10.0.0.1 ping statistics --- 00:08:18.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.468 rtt min/avg/max/mdev = 0.085/0.085/0.085/0.000 ms 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=797792 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 797792 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 797792 ']' 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/spdk.sock...' 00:08:18.468 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:18.468 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.468 [2024-07-11 21:14:53.169385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:18.468 [2024-07-11 21:14:53.169456] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.469 EAL: No free 2048 kB hugepages reported on node 1 00:08:18.726 [2024-07-11 21:14:53.239660] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.726 [2024-07-11 21:14:53.336450] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.726 [2024-07-11 21:14:53.336518] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.726 [2024-07-11 21:14:53.336535] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.726 [2024-07-11 21:14:53.336555] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.726 [2024-07-11 21:14:53.336568] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:18.726 [2024-07-11 21:14:53.336664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.726 [2024-07-11 21:14:53.336722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.726 [2024-07-11 21:14:53.336766] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.726 [2024-07-11 21:14:53.336774] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.726 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.985 [2024-07-11 21:14:53.497520] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 
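The for loop entered here issues four RPCs per index: create a null bdev, create a subsystem with a fixed serial, attach the bdev as a namespace, and add a TCP listener. Sketched with scripts/rpc.py standing in for the suite's rpc_cmd wrapper (same RPC names and arguments as in the trace):

    for i in 1 2 3 4; do
        scripts/rpc.py bdev_null_create Null$i 102400 512      # NULL_BDEV_SIZE / NULL_BLOCK_SIZE from discovery.sh
        scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i \
            -a -s SPDK0000000000000$i                          # -a: allow any host
        scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
        scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i \
            -t tcp -a 10.0.0.2 -s 4420
    done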
00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.985 Null1 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.985 [2024-07-11 21:14:53.537861] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.985 Null2 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.985 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:18.986 21:14:53 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 Null3 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 Null4 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 
nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:18.986 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:08:19.245 00:08:19.245 Discovery Log Number of Records 6, Generation counter 6 00:08:19.245 =====Discovery Log Entry 0====== 00:08:19.245 trtype: tcp 00:08:19.245 adrfam: ipv4 00:08:19.245 subtype: current discovery subsystem 00:08:19.245 treq: not required 00:08:19.245 portid: 0 00:08:19.245 trsvcid: 4420 00:08:19.245 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:19.245 traddr: 10.0.0.2 00:08:19.245 eflags: explicit discovery connections, duplicate discovery information 00:08:19.245 sectype: none 00:08:19.245 =====Discovery Log Entry 1====== 00:08:19.245 trtype: tcp 00:08:19.245 adrfam: ipv4 00:08:19.245 subtype: nvme subsystem 00:08:19.245 treq: not required 00:08:19.245 portid: 0 00:08:19.245 trsvcid: 4420 00:08:19.245 subnqn: nqn.2016-06.io.spdk:cnode1 00:08:19.245 traddr: 10.0.0.2 00:08:19.245 eflags: none 00:08:19.245 sectype: none 00:08:19.245 =====Discovery Log Entry 2====== 00:08:19.245 trtype: tcp 00:08:19.245 adrfam: ipv4 00:08:19.245 subtype: nvme subsystem 00:08:19.245 treq: not required 00:08:19.245 portid: 0 00:08:19.245 trsvcid: 4420 00:08:19.245 subnqn: nqn.2016-06.io.spdk:cnode2 00:08:19.245 traddr: 10.0.0.2 00:08:19.245 eflags: none 00:08:19.245 sectype: none 00:08:19.245 =====Discovery Log Entry 3====== 00:08:19.245 trtype: tcp 00:08:19.245 adrfam: ipv4 00:08:19.245 subtype: nvme subsystem 00:08:19.245 treq: not required 00:08:19.245 portid: 0 00:08:19.245 trsvcid: 4420 00:08:19.245 subnqn: nqn.2016-06.io.spdk:cnode3 00:08:19.245 traddr: 10.0.0.2 00:08:19.245 eflags: none 00:08:19.245 sectype: none 00:08:19.245 =====Discovery Log Entry 4====== 00:08:19.245 trtype: tcp 00:08:19.245 adrfam: ipv4 00:08:19.245 subtype: nvme subsystem 00:08:19.245 treq: not required 
00:08:19.245 portid: 0 00:08:19.245 trsvcid: 4420 00:08:19.245 subnqn: nqn.2016-06.io.spdk:cnode4 00:08:19.245 traddr: 10.0.0.2 00:08:19.245 eflags: none 00:08:19.245 sectype: none 00:08:19.245 =====Discovery Log Entry 5====== 00:08:19.245 trtype: tcp 00:08:19.245 adrfam: ipv4 00:08:19.245 subtype: discovery subsystem referral 00:08:19.245 treq: not required 00:08:19.245 portid: 0 00:08:19.245 trsvcid: 4430 00:08:19.245 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:08:19.245 traddr: 10.0.0.2 00:08:19.245 eflags: none 00:08:19.245 sectype: none 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:08:19.245 Perform nvmf subsystem discovery via RPC 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.245 [ 00:08:19.245 { 00:08:19.245 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:08:19.245 "subtype": "Discovery", 00:08:19.245 "listen_addresses": [ 00:08:19.245 { 00:08:19.245 "trtype": "TCP", 00:08:19.245 "adrfam": "IPv4", 00:08:19.245 "traddr": "10.0.0.2", 00:08:19.245 "trsvcid": "4420" 00:08:19.245 } 00:08:19.245 ], 00:08:19.245 "allow_any_host": true, 00:08:19.245 "hosts": [] 00:08:19.245 }, 00:08:19.245 { 00:08:19.245 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:08:19.245 "subtype": "NVMe", 00:08:19.245 "listen_addresses": [ 00:08:19.245 { 00:08:19.245 "trtype": "TCP", 00:08:19.245 "adrfam": "IPv4", 00:08:19.245 "traddr": "10.0.0.2", 00:08:19.245 "trsvcid": "4420" 00:08:19.245 } 00:08:19.245 ], 00:08:19.245 "allow_any_host": true, 00:08:19.245 "hosts": [], 00:08:19.245 "serial_number": "SPDK00000000000001", 00:08:19.245 "model_number": "SPDK bdev Controller", 00:08:19.245 "max_namespaces": 32, 00:08:19.245 "min_cntlid": 1, 00:08:19.245 "max_cntlid": 65519, 00:08:19.245 "namespaces": [ 00:08:19.245 { 00:08:19.245 "nsid": 1, 00:08:19.245 "bdev_name": "Null1", 00:08:19.245 "name": "Null1", 00:08:19.245 "nguid": "2195E45B1D2B4BFD9F0EB66D4318FEEA", 00:08:19.245 "uuid": "2195e45b-1d2b-4bfd-9f0e-b66d4318feea" 00:08:19.245 } 00:08:19.245 ] 00:08:19.245 }, 00:08:19.245 { 00:08:19.245 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:19.245 "subtype": "NVMe", 00:08:19.245 "listen_addresses": [ 00:08:19.245 { 00:08:19.245 "trtype": "TCP", 00:08:19.245 "adrfam": "IPv4", 00:08:19.245 "traddr": "10.0.0.2", 00:08:19.245 "trsvcid": "4420" 00:08:19.245 } 00:08:19.245 ], 00:08:19.245 "allow_any_host": true, 00:08:19.245 "hosts": [], 00:08:19.245 "serial_number": "SPDK00000000000002", 00:08:19.245 "model_number": "SPDK bdev Controller", 00:08:19.245 "max_namespaces": 32, 00:08:19.245 "min_cntlid": 1, 00:08:19.245 "max_cntlid": 65519, 00:08:19.245 "namespaces": [ 00:08:19.245 { 00:08:19.245 "nsid": 1, 00:08:19.245 "bdev_name": "Null2", 00:08:19.245 "name": "Null2", 00:08:19.245 "nguid": "4EA3F0EAA3BC4B1D9196BF38575D7578", 00:08:19.245 "uuid": "4ea3f0ea-a3bc-4b1d-9196-bf38575d7578" 00:08:19.245 } 00:08:19.245 ] 00:08:19.245 }, 00:08:19.245 { 00:08:19.245 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:08:19.245 "subtype": "NVMe", 00:08:19.245 "listen_addresses": [ 00:08:19.245 { 00:08:19.245 "trtype": "TCP", 00:08:19.245 "adrfam": "IPv4", 00:08:19.245 "traddr": "10.0.0.2", 00:08:19.245 "trsvcid": "4420" 00:08:19.245 } 00:08:19.245 ], 00:08:19.245 "allow_any_host": true, 
00:08:19.245 "hosts": [], 00:08:19.245 "serial_number": "SPDK00000000000003", 00:08:19.245 "model_number": "SPDK bdev Controller", 00:08:19.245 "max_namespaces": 32, 00:08:19.245 "min_cntlid": 1, 00:08:19.245 "max_cntlid": 65519, 00:08:19.245 "namespaces": [ 00:08:19.245 { 00:08:19.245 "nsid": 1, 00:08:19.245 "bdev_name": "Null3", 00:08:19.245 "name": "Null3", 00:08:19.245 "nguid": "14E7C87A1179418CB5AC1218A77CCFE3", 00:08:19.245 "uuid": "14e7c87a-1179-418c-b5ac-1218a77ccfe3" 00:08:19.245 } 00:08:19.245 ] 00:08:19.245 }, 00:08:19.245 { 00:08:19.245 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:08:19.245 "subtype": "NVMe", 00:08:19.245 "listen_addresses": [ 00:08:19.245 { 00:08:19.245 "trtype": "TCP", 00:08:19.245 "adrfam": "IPv4", 00:08:19.245 "traddr": "10.0.0.2", 00:08:19.245 "trsvcid": "4420" 00:08:19.245 } 00:08:19.245 ], 00:08:19.245 "allow_any_host": true, 00:08:19.245 "hosts": [], 00:08:19.245 "serial_number": "SPDK00000000000004", 00:08:19.245 "model_number": "SPDK bdev Controller", 00:08:19.245 "max_namespaces": 32, 00:08:19.245 "min_cntlid": 1, 00:08:19.245 "max_cntlid": 65519, 00:08:19.245 "namespaces": [ 00:08:19.245 { 00:08:19.245 "nsid": 1, 00:08:19.245 "bdev_name": "Null4", 00:08:19.245 "name": "Null4", 00:08:19.245 "nguid": "80F4CFBEB4E54BCFA88AC1BDEFC36895", 00:08:19.245 "uuid": "80f4cfbe-b4e5-4bcf-a88a-c1bdefc36895" 00:08:19.245 } 00:08:19.245 ] 00:08:19.245 } 00:08:19.245 ] 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.245 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:19.246 21:14:53 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:19.246 rmmod nvme_tcp 00:08:19.246 rmmod nvme_fabrics 00:08:19.246 rmmod nvme_keyring 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 797792 ']' 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 797792 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 797792 ']' 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 797792 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:19.246 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 797792 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 797792' 00:08:19.504 killing process with pid 797792 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 797792 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 797792 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:19.504 21:14:54 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.041 21:14:56 nvmf_tcp.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:22.041 00:08:22.041 real 0m5.465s 00:08:22.041 user 0m4.419s 00:08:22.041 sys 0m1.915s 00:08:22.041 21:14:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.041 21:14:56 nvmf_tcp.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:08:22.041 ************************************ 00:08:22.041 END TEST nvmf_target_discovery 00:08:22.041 ************************************ 00:08:22.041 21:14:56 nvmf_tcp -- common/autotest_common.sh@1142 -- # 
return 0 00:08:22.041 21:14:56 nvmf_tcp -- nvmf/nvmf.sh@26 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:22.041 21:14:56 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:22.041 21:14:56 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.041 21:14:56 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:22.041 ************************************ 00:08:22.041 START TEST nvmf_referrals 00:08:22.041 ************************************ 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:08:22.041 * Looking for test storage... 00:08:22.041 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 
00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:08:22.041 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:08:22.042 21:14:56 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:23.944 21:14:58 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:23.944 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:23.944 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.944 21:14:58 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:23.944 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:23.944 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:23.944 21:14:58 
nvmf_tcp.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:23.944 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:23.945 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:23.945 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.269 ms 00:08:23.945 00:08:23.945 --- 10.0.0.2 ping statistics --- 00:08:23.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.945 rtt min/avg/max/mdev = 0.269/0.269/0.269/0.000 ms 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:23.945 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:23.945 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:08:23.945 00:08:23.945 --- 10.0.0.1 ping statistics --- 00:08:23.945 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:23.945 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=799876 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 799876 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 799876 ']' 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:08:23.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:23.945 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:23.945 [2024-07-11 21:14:58.640009] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:08:23.945 [2024-07-11 21:14:58.640102] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:23.945 EAL: No free 2048 kB hugepages reported on node 1 00:08:23.945 [2024-07-11 21:14:58.705695] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:24.203 [2024-07-11 21:14:58.796943] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.203 [2024-07-11 21:14:58.796993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.203 [2024-07-11 21:14:58.797024] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.203 [2024-07-11 21:14:58.797036] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.203 [2024-07-11 21:14:58.797046] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:24.203 [2024-07-11 21:14:58.797173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:24.203 [2024-07-11 21:14:58.797250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:24.203 [2024-07-11 21:14:58.797308] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:24.203 [2024-07-11 21:14:58.797311] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.203 [2024-07-11 21:14:58.945427] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.203 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.204 [2024-07-11 21:14:58.957625] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 8009 *** 00:08:24.204 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.204 21:14:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:08:24.204 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.204 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.204 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.204 21:14:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:08:24.204 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.204 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.463 21:14:58 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.463 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.721 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 
127.0.0.2 -s 4430 -n discovery 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:24.722 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:24.980 21:14:59 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:24.980 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.239 21:14:59 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:08:25.497 21:15:00 
nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.497 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:08:25.761 
21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:25.761 rmmod nvme_tcp 00:08:25.761 rmmod nvme_fabrics 00:08:25.761 rmmod nvme_keyring 00:08:25.761 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 799876 ']' 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 799876 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 799876 ']' 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 799876 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 799876 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 799876' 00:08:26.079 killing process with pid 799876 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 799876 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 799876 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:26.079 21:15:00 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.609 21:15:02 nvmf_tcp.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:28.609 00:08:28.609 real 0m6.470s 00:08:28.609 user 0m9.216s 00:08:28.609 sys 0m2.101s 00:08:28.609 21:15:02 nvmf_tcp.nvmf_referrals -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.609 21:15:02 nvmf_tcp.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:08:28.609 ************************************ 00:08:28.609 END TEST nvmf_referrals 00:08:28.609 ************************************ 00:08:28.609 21:15:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:08:28.609 21:15:02 nvmf_tcp -- nvmf/nvmf.sh@27 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:28.609 21:15:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:28.609 21:15:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.609 21:15:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:08:28.609 ************************************ 00:08:28.609 START TEST nvmf_connect_disconnect 00:08:28.609 ************************************ 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:08:28.609 * Looking for test storage... 00:08:28.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:28.609 21:15:02 
nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:08:28.609 21:15:02 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.508 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.508 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:30.509 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:30.509 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.509 21:15:04 
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:30.509 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:30.509 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.509 21:15:04 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.509 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.509 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:08:30.509 00:08:30.509 --- 10.0.0.2 ping statistics --- 00:08:30.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.509 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.509 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.509 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:30.509 00:08:30.509 --- 10.0.0.1 ping statistics --- 00:08:30.509 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.509 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=802287 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 802287 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 802287 ']' 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.509 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.509 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.510 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.510 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.510 [2024-07-11 21:15:05.136328] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
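Condensed, the device discovery and nvmf_tcp_init sequence traced above amounts to a small amount of shell. The sketch below is a simplified reading of the traced commands, not the script itself; it assumes the same interface names (cvl_0_0/cvl_0_1) and the 10.0.0.0/24 test subnet that appear in this log.

# Sketch of the traced setup; names and addresses are taken from the log above.
# 1) Map each supported PCI function to its kernel net device via sysfs:
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")    # keep only the interface names
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
done
# 2) Move one port of the NIC pair into a private namespace so the target
#    (10.0.0.2) and initiator (10.0.0.1) talk over a real link, not loopback:
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # open the NVMe/TCP port
ping -c 1 10.0.0.2                                                   # initiator -> target
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
# 3) Launch the target application inside the namespace (nvmfappstart):
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF

Once both pings succeed, nvmf_tcp_init returns 0 and the test body proper can begin.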
00:08:30.510 [2024-07-11 21:15:05.136411] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:30.510 EAL: No free 2048 kB hugepages reported on node 1 00:08:30.510 [2024-07-11 21:15:05.210797] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:30.768 [2024-07-11 21:15:05.306128] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:30.768 [2024-07-11 21:15:05.306184] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:30.768 [2024-07-11 21:15:05.306199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:30.768 [2024-07-11 21:15:05.306213] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:30.768 [2024-07-11 21:15:05.306225] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:30.768 [2024-07-11 21:15:05.306305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:30.768 [2024-07-11 21:15:05.306360] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:30.768 [2024-07-11 21:15:05.306412] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:30.768 [2024-07-11 21:15:05.306414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.768 [2024-07-11 21:15:05.475796] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:30.768 21:15:05 
nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:08:30.768 [2024-07-11 21:15:05.532660] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:08:30.768 21:15:05 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:08:33.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:35.817 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:37.709 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:40.229 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:42.752 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:44.648 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:47.174 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:49.070 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:51.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:54.175 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:56.069 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:58.590 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:01.114 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:03.009 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:05.536 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:08.060 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:09.962 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:12.486 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:15.037 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:16.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.460 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:21.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:23.881 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:26.407 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:28.930 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:30.822 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:33.346 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:35.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:37.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:40.342 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:42.862 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:44.755 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:47.278 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:49.801 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:51.695 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:54.220 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:56.743 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:01.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:03.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:05.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:08.148 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:12.564 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:15.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:17.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:20.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:22.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:24.559 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:27.115 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:29.031 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:31.549 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.073 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:35.966 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:38.496 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.400 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.932 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:45.471 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:47.377 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.912 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:52.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.357 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:56.895 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:58.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:01.329 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.234 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.772 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:12.798 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:15.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:17.235 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 
controller(s) 00:11:19.765 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:21.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.203 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:26.135 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:28.666 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:30.569 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:33.107 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:35.676 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:37.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:40.113 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:42.649 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.087 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:49.621 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.527 [2024-07-11 21:18:26.156588] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2385af0 is same with the state(5) to be set 00:11:51.527 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.061 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:56.617 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:58.518 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.041 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:03.572 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:05.466 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:07.989 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:10.511 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:12.419 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:14.953 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:17.499 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.424 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.959 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:21.959 rmmod nvme_tcp 00:12:21.959 rmmod nvme_fabrics 00:12:21.959 rmmod nvme_keyring 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 802287 ']' 00:12:21.959 21:18:56 
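Stripped of the xtrace prefixes, the test body that produced the long run of disconnect messages above is short. The RPC provisioning calls below are copied from the trace; the loop itself is a hedged reconstruction (connect_disconnect.sh is not reproduced in this log), inferred from num_iterations=100, NVME_CONNECT='nvme connect -i 8', and the standard nvme-cli output format.

# Provisioning, as traced above (rpc_cmd drives scripts/rpc.py over /var/tmp/spdk.sock):
rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0
rpc_cmd bdev_malloc_create 64 512        # 64 MiB bdev, 512-byte blocks -> "Malloc0"
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# Reconstructed loop shape (an assumption, not a verbatim copy of the script):
for ((i = 0; i < 100; i++)); do
    nvme connect -i 8 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
    nvme disconnect -n nqn.2016-06.io.spdk:cnode1   # prints "NQN:... disconnected 1 controller(s)"
done

Each iteration accounts for one "disconnected 1 controller(s)" line above. The lone tcp.c *ERROR* at 21:18:26 (a qpair recv-state transition notice during teardown) did not abort the run; the loop continued through all 100 iterations.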
nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 802287 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 802287 ']' 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 802287 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:21.959 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 802287 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 802287' 00:12:21.960 killing process with pid 802287 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 802287 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 802287 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:21.960 21:18:56 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.500 21:18:58 nvmf_tcp.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:24.500 00:12:24.500 real 3m55.879s 00:12:24.500 user 14m59.181s 00:12:24.500 sys 0m33.984s 00:12:24.500 21:18:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:24.500 21:18:58 nvmf_tcp.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:24.500 ************************************ 00:12:24.500 END TEST nvmf_connect_disconnect 00:12:24.500 ************************************ 00:12:24.500 21:18:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:24.500 21:18:58 nvmf_tcp -- nvmf/nvmf.sh@28 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.500 21:18:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:24.500 21:18:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:24.500 21:18:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:24.500 ************************************ 00:12:24.500 START TEST nvmf_multitarget 00:12:24.500 ************************************ 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:24.500 * Looking 
for test storage... 00:12:24.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:24.500 21:18:58 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 
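Part of the common.sh setup traced above is the derivation of a per-run initiator identity. A minimal sketch follows; the ${NVME_HOSTNQN##*:} derivation is an assumption (the log only shows that NVME_HOSTID equals the UUID suffix of the generated host NQN), and the connect line is illustrative rather than a command traced here.

NVME_HOSTNQN=$(nvme gen-hostnqn)    # e.g. nqn.2014-08.org.nvmexpress:uuid:5b23e107-...
NVME_HOSTID=${NVME_HOSTNQN##*:}     # assumed: the UUID after the last ':'
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
# Initiator-side commands can then identify themselves consistently, e.g.:
nvme connect "${NVME_HOST[@]}" -t tcp -n nqn.2016-06.io.spdk:testnqn -a 10.0.0.2 -s 4420

Pinning the host NQN/ID this way keeps the target's view of the initiator stable across repeated connects within one test run.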
00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:24.501 21:18:58 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ 
e810 == mlx5 ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:26.405 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:26.405 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:26.405 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:26.405 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:26.405 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.405 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:12:26.405 00:12:26.405 --- 10.0.0.2 ping statistics --- 00:12:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.405 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:26.405 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:26.405 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.079 ms 00:12:26.405 00:12:26.405 --- 10.0.0.1 ping statistics --- 00:12:26.405 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.405 rtt min/avg/max/mdev = 0.079/0.079/0.079/0.000 ms 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=833887 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 833887 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 833887 ']' 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.405 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:26.405 21:19:00 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.405 [2024-07-11 21:19:01.045315] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
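The multitarget test body that follows drives SPDK's multi-target RPCs through test/nvmf/target/multitarget_rpc.py, counting targets with jq between steps. Condensed from the trace below (rpc_py abbreviates the full Jenkins path; the meaning of -s 32 as the new target's max subsystem count is an assumption), it is roughly:

# rpc_py = spdk/test/nvmf/target/multitarget_rpc.py (full path in the trace)
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # only the default target exists
$rpc_py nvmf_create_target -n nvmf_tgt_1 -s 32        # prints the new target's name
$rpc_py nvmf_create_target -n nvmf_tgt_2 -s 32
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 3 ]   # default + two new targets
$rpc_py nvmf_delete_target -n nvmf_tgt_1              # prints "true" on success
$rpc_py nvmf_delete_target -n nvmf_tgt_2
[ "$($rpc_py nvmf_get_targets | jq length)" -eq 1 ]   # back to just the default

In the traced script the checks are written as failure guards ('[' 1 '!=' 1 ']'); the assertions above express the same expected counts.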
00:12:26.405 [2024-07-11 21:19:01.045392] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.405 EAL: No free 2048 kB hugepages reported on node 1 00:12:26.405 [2024-07-11 21:19:01.117651] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:26.663 [2024-07-11 21:19:01.212483] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:26.663 [2024-07-11 21:19:01.212542] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.663 [2024-07-11 21:19:01.212559] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.663 [2024-07-11 21:19:01.212573] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.663 [2024-07-11 21:19:01.212585] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.663 [2024-07-11 21:19:01.212668] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.663 [2024-07-11 21:19:01.212722] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:26.663 [2024-07-11 21:19:01.212746] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:26.663 [2024-07-11 21:19:01.212749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:26.663 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:26.921 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:26.921 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:26.921 "nvmf_tgt_1" 00:12:26.921 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:27.181 "nvmf_tgt_2" 00:12:27.181 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.181 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:27.181 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 
'!=' 3 ']' 00:12:27.181 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:27.181 true 00:12:27.440 21:19:01 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:27.440 true 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:27.440 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:27.440 rmmod nvme_tcp 00:12:27.698 rmmod nvme_fabrics 00:12:27.698 rmmod nvme_keyring 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 833887 ']' 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 833887 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 833887 ']' 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 833887 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 833887 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 833887' 00:12:27.698 killing process with pid 833887 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 833887 00:12:27.698 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 833887 00:12:27.958 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:27.958 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:27.958 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:27.958 21:19:02 nvmf_tcp.nvmf_multitarget -- 
nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:27.958 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:27.958 21:19:02 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:27.958 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:27.958 21:19:02 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:29.863 21:19:04 nvmf_tcp.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:29.863 00:12:29.863 real 0m5.736s 00:12:29.863 user 0m6.519s 00:12:29.863 sys 0m1.960s 00:12:29.863 21:19:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:29.863 21:19:04 nvmf_tcp.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:29.863 ************************************ 00:12:29.863 END TEST nvmf_multitarget 00:12:29.863 ************************************ 00:12:29.863 21:19:04 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:29.863 21:19:04 nvmf_tcp -- nvmf/nvmf.sh@29 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.863 21:19:04 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:29.863 21:19:04 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:29.863 21:19:04 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:29.864 ************************************ 00:12:29.864 START TEST nvmf_rpc 00:12:29.864 ************************************ 00:12:29.864 21:19:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:29.864 * Looking for test storage... 
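For orientation: each suite in this log is wrapped by run_test, which produces the START TEST/END TEST banners and the real/user/sys summaries seen above (3m55.879s wall-clock for nvmf_connect_disconnect, 5.736s for nvmf_multitarget). A minimal sketch of that wrapper pattern, assuming plain bash time and simplified banners; the real autotest_common.sh version adds the '[' 3 -le 1 ']' argument guard and xtrace toggling visible in the trace:

# Hypothetical simplification of autotest_common.sh's run_test:
run_test() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"
    echo "************ END TEST $name ************"
}
run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp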
00:12:30.122 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:30.122 21:19:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:30.123 21:19:04 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 
00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:32.025 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc 
-- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:32.025 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.025 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:32.026 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:32.026 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- 
nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:32.026 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:32.026 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:12:32.026 00:12:32.026 --- 10.0.0.2 ping statistics --- 00:12:32.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.026 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:32.026 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:32.026 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:12:32.026 00:12:32.026 --- 10.0.0.1 ping statistics --- 00:12:32.026 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:32.026 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=835982 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 835982 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 835982 ']' 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:32.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:32.026 21:19:06 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.026 [2024-07-11 21:19:06.756517] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:12:32.026 [2024-07-11 21:19:06.756603] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:32.026 EAL: No free 2048 kB hugepages reported on node 1 00:12:32.285 [2024-07-11 21:19:06.822910] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:32.285 [2024-07-11 21:19:06.914095] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:32.285 [2024-07-11 21:19:06.914161] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
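nvmf_tcp_init, traced above, builds a two-port loopback topology on a single host: one E810 port (cvl_0_0, 10.0.0.2) is moved into a private network namespace and serves as the target interface, while its sibling port (cvl_0_1, 10.0.0.1) stays in the default namespace as the initiator, so NVMe/TCP traffic actually traverses the NICs. A minimal sketch of the same steps, assuming the interface and namespace names recorded in the log:

  # target port goes into its own namespace; initiator port stays in the root ns
  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1   # start clean
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # admit NVMe/TCP (port 4420) through the initiator-side firewall
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # sanity-check both directions before starting the target
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Every target-side command, including nvmf_tgt itself, is then wrapped in 'ip netns exec cvl_0_0_ns_spdk', which is exactly what the NVMF_TARGET_NS_CMD/NVMF_APP assignments and the nvmfpid startup line record.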
00:12:32.285 [2024-07-11 21:19:06.914178] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:32.285 [2024-07-11 21:19:06.914191] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:32.285 [2024-07-11 21:19:06.914204] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:32.285 [2024-07-11 21:19:06.914288] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:32.285 [2024-07-11 21:19:06.914343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:32.285 [2024-07-11 21:19:06.914395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:32.285 [2024-07-11 21:19:06.914397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:32.285 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:32.285 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:12:32.285 21:19:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:32.285 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:32.285 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:32.546 "tick_rate": 2700000000, 00:12:32.546 "poll_groups": [ 00:12:32.546 { 00:12:32.546 "name": "nvmf_tgt_poll_group_000", 00:12:32.546 "admin_qpairs": 0, 00:12:32.546 "io_qpairs": 0, 00:12:32.546 "current_admin_qpairs": 0, 00:12:32.546 "current_io_qpairs": 0, 00:12:32.546 "pending_bdev_io": 0, 00:12:32.546 "completed_nvme_io": 0, 00:12:32.546 "transports": [] 00:12:32.546 }, 00:12:32.546 { 00:12:32.546 "name": "nvmf_tgt_poll_group_001", 00:12:32.546 "admin_qpairs": 0, 00:12:32.546 "io_qpairs": 0, 00:12:32.546 "current_admin_qpairs": 0, 00:12:32.546 "current_io_qpairs": 0, 00:12:32.546 "pending_bdev_io": 0, 00:12:32.546 "completed_nvme_io": 0, 00:12:32.546 "transports": [] 00:12:32.546 }, 00:12:32.546 { 00:12:32.546 "name": "nvmf_tgt_poll_group_002", 00:12:32.546 "admin_qpairs": 0, 00:12:32.546 "io_qpairs": 0, 00:12:32.546 "current_admin_qpairs": 0, 00:12:32.546 "current_io_qpairs": 0, 00:12:32.546 "pending_bdev_io": 0, 00:12:32.546 "completed_nvme_io": 0, 00:12:32.546 "transports": [] 00:12:32.546 }, 00:12:32.546 { 00:12:32.546 "name": "nvmf_tgt_poll_group_003", 00:12:32.546 "admin_qpairs": 0, 00:12:32.546 "io_qpairs": 0, 00:12:32.546 "current_admin_qpairs": 0, 00:12:32.546 "current_io_qpairs": 0, 00:12:32.546 "pending_bdev_io": 0, 00:12:32.546 "completed_nvme_io": 0, 00:12:32.546 "transports": [] 00:12:32.546 } 00:12:32.546 ] 00:12:32.546 }' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- 
target/rpc.sh@15 -- # wc -l 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 [2024-07-11 21:19:07.170093] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:32.546 "tick_rate": 2700000000, 00:12:32.546 "poll_groups": [ 00:12:32.546 { 00:12:32.546 "name": "nvmf_tgt_poll_group_000", 00:12:32.546 "admin_qpairs": 0, 00:12:32.546 "io_qpairs": 0, 00:12:32.546 "current_admin_qpairs": 0, 00:12:32.546 "current_io_qpairs": 0, 00:12:32.546 "pending_bdev_io": 0, 00:12:32.546 "completed_nvme_io": 0, 00:12:32.546 "transports": [ 00:12:32.546 { 00:12:32.546 "trtype": "TCP" 00:12:32.546 } 00:12:32.546 ] 00:12:32.546 }, 00:12:32.546 { 00:12:32.546 "name": "nvmf_tgt_poll_group_001", 00:12:32.546 "admin_qpairs": 0, 00:12:32.546 "io_qpairs": 0, 00:12:32.546 "current_admin_qpairs": 0, 00:12:32.546 "current_io_qpairs": 0, 00:12:32.546 "pending_bdev_io": 0, 00:12:32.546 "completed_nvme_io": 0, 00:12:32.546 "transports": [ 00:12:32.546 { 00:12:32.546 "trtype": "TCP" 00:12:32.546 } 00:12:32.546 ] 00:12:32.546 }, 00:12:32.546 { 00:12:32.546 "name": "nvmf_tgt_poll_group_002", 00:12:32.546 "admin_qpairs": 0, 00:12:32.546 "io_qpairs": 0, 00:12:32.546 "current_admin_qpairs": 0, 00:12:32.546 "current_io_qpairs": 0, 00:12:32.546 "pending_bdev_io": 0, 00:12:32.546 "completed_nvme_io": 0, 00:12:32.546 "transports": [ 00:12:32.546 { 00:12:32.546 "trtype": "TCP" 00:12:32.546 } 00:12:32.546 ] 00:12:32.546 }, 00:12:32.546 { 00:12:32.546 "name": "nvmf_tgt_poll_group_003", 00:12:32.546 "admin_qpairs": 0, 00:12:32.546 "io_qpairs": 0, 00:12:32.546 "current_admin_qpairs": 0, 00:12:32.546 "current_io_qpairs": 0, 00:12:32.546 "pending_bdev_io": 0, 00:12:32.546 "completed_nvme_io": 0, 00:12:32.546 "transports": [ 00:12:32.546 { 00:12:32.546 "trtype": "TCP" 00:12:32.546 } 00:12:32.546 ] 00:12:32.546 } 00:12:32.546 ] 00:12:32.546 }' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:32.546 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 
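The jcount/jsum helpers traced here are one-liners over the nvmf_get_stats JSON: jcount counts how many values a jq filter yields (four poll groups, one per reactor core under the -m 0xF mask), and jsum totals them with awk. A sketch of the equivalent pipelines, assuming SPDK's rpc.py client in place of the test suite's rpc_cmd wrapper:

  # one poll group per reactor core => expect 4 for -m 0xF
  rpc.py nvmf_get_stats | jq '.poll_groups[].name' | wc -l
  # total admin (or io) qpairs across poll groups; 0 before any host connects
  rpc.py nvmf_get_stats | jq '.poll_groups[].admin_qpairs' | awk '{s+=$1} END {print s}'

Before nvmf_create_transport runs, .poll_groups[0].transports[0] is null, as the '[[ null == null ]]' check above confirms; after 'nvmf_create_transport -t tcp -o -u 8192' each poll group reports a TCP transport entry, which is what the second stats dump shows.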
00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.547 Malloc1 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.547 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.807 [2024-07-11 21:19:07.323575] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 
-- # local arg=nvme 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:12:32.807 [2024-07-11 21:19:07.346114] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:32.807 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:32.807 could not add new controller: failed to write to nvme-fabrics device 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:32.807 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:33.372 21:19:07 nvmf_tcp.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:33.372 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:33.372 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:33.372 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:33.372 21:19:07 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:35.277 21:19:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:35.277 21:19:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:35.277 21:19:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:35.277 21:19:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:35.277 21:19:09 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:35.277 21:19:09 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:35.277 21:19:09 nvmf_tcp.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:35.537 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:35.537 [2024-07-11 21:19:10.130042] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:12:35.537 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:35.537 could not add new controller: failed to write to nvme-fabrics device 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@651 -- # es=1 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:35.537 21:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:36.104 21:19:10 nvmf_tcp.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:36.104 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:36.104 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:36.104 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:36.104 21:19:10 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:38.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:38.698 21:19:12 
nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.698 21:19:12 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 [2024-07-11 21:19:13.004645] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:38.698 21:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:38.956 21:19:13 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:38.956 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:38.956 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:38.956 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:38.956 21:19:13 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:41.491 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.491 [2024-07-11 21:19:15.813354] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:41.491 21:19:15 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:41.749 21:19:16 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:41.749 21:19:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 
-- # local i=0 00:12:41.749 21:19:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:41.749 21:19:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:41.749 21:19:16 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.283 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.283 [2024-07-11 21:19:18.588498] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:44.283 21:19:18 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:44.543 21:19:19 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:44.543 21:19:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:44.543 21:19:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.543 21:19:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:44.543 21:19:19 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.078 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 [2024-07-11 21:19:21.430173] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:47.078 21:19:21 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:47.337 21:19:22 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:47.337 21:19:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:47.337 21:19:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:47.337 21:19:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:47.337 21:19:22 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:49.873 
21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:49.873 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.873 [2024-07-11 21:19:24.214840] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:49.873 21:19:24 
nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:49.873 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:50.133 21:19:24 nvmf_tcp.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:50.133 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:12:50.133 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:50.133 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:12:50.133 21:19:24 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:52.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.667 [2024-07-11 21:19:26.988681] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.667 21:19:26 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.667 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.667 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.667 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.667 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.667 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.667 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.667 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.667 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 [2024-07-11 21:19:27.036783] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 [2024-07-11 21:19:27.084955] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
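The serial-number wait traced at the top of this run (common/autotest_common.sh@1198-1208) polls lsblk until the expected number of devices exposing the serial appears. A minimal sketch of that helper, reconstructed from the xtrace; the retry bound (15), the 2 s sleep, and the variable names follow the trace, while the argument handling is abbreviated and not the verbatim SPDK helper:

  waitforserial() {
      local serial=$1 i=0
      local nvme_device_counter=${2:-1} nvme_devices=0
      while ((i++ <= 15)); do
          sleep 2                                              # @1205 in the trace
          nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
          ((nvme_devices == nvme_device_counter)) && return 0  # @1208
      done
      return 1
  }

waitforserial_disconnect is the mirror image: it polls the same lsblk output with grep -q -w and returns once the serial is no longer listed.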
00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 [2024-07-11 21:19:27.133122] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 
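Each of the five iterations traced here runs the same create/configure/teardown sequence over JSON-RPC. Condensed into plain rpc.py calls (rpc_cmd in the trace is SPDK's shell wrapper around scripts/rpc.py; every command below appears verbatim in the xtrace):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  loops=5
  for i in $(seq 1 "$loops"); do
      "$rpc" nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME
      "$rpc" nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
      "$rpc" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
      "$rpc" nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1
      "$rpc" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc" nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  done

The repeated *** NVMe/TCP Target Listening *** notices in the log mark the add_listener step of each pass.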
00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 [2024-07-11 21:19:27.181280] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:52.668 "tick_rate": 2700000000, 00:12:52.668 "poll_groups": [ 00:12:52.668 { 00:12:52.668 "name": "nvmf_tgt_poll_group_000", 00:12:52.668 "admin_qpairs": 2, 00:12:52.668 "io_qpairs": 84, 00:12:52.668 "current_admin_qpairs": 0, 00:12:52.668 "current_io_qpairs": 0, 00:12:52.668 "pending_bdev_io": 0, 00:12:52.668 "completed_nvme_io": 232, 00:12:52.668 "transports": [ 00:12:52.668 { 00:12:52.668 "trtype": "TCP" 00:12:52.668 } 00:12:52.668 ] 00:12:52.668 }, 00:12:52.668 { 00:12:52.668 "name": "nvmf_tgt_poll_group_001", 00:12:52.668 "admin_qpairs": 2, 00:12:52.668 "io_qpairs": 84, 00:12:52.668 "current_admin_qpairs": 0, 00:12:52.668 "current_io_qpairs": 0, 00:12:52.668 "pending_bdev_io": 0, 00:12:52.668 "completed_nvme_io": 187, 00:12:52.668 "transports": [ 00:12:52.668 { 00:12:52.668 "trtype": "TCP" 00:12:52.668 } 00:12:52.668 ] 00:12:52.668 }, 00:12:52.668 { 00:12:52.668 
"name": "nvmf_tgt_poll_group_002", 00:12:52.668 "admin_qpairs": 1, 00:12:52.668 "io_qpairs": 84, 00:12:52.668 "current_admin_qpairs": 0, 00:12:52.668 "current_io_qpairs": 0, 00:12:52.668 "pending_bdev_io": 0, 00:12:52.668 "completed_nvme_io": 85, 00:12:52.668 "transports": [ 00:12:52.668 { 00:12:52.668 "trtype": "TCP" 00:12:52.668 } 00:12:52.668 ] 00:12:52.668 }, 00:12:52.668 { 00:12:52.668 "name": "nvmf_tgt_poll_group_003", 00:12:52.668 "admin_qpairs": 2, 00:12:52.668 "io_qpairs": 84, 00:12:52.668 "current_admin_qpairs": 0, 00:12:52.668 "current_io_qpairs": 0, 00:12:52.668 "pending_bdev_io": 0, 00:12:52.668 "completed_nvme_io": 182, 00:12:52.668 "transports": [ 00:12:52.668 { 00:12:52.668 "trtype": "TCP" 00:12:52.668 } 00:12:52.668 ] 00:12:52.668 } 00:12:52.668 ] 00:12:52.668 }' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:52.668 rmmod nvme_tcp 00:12:52.668 rmmod nvme_fabrics 00:12:52.668 rmmod nvme_keyring 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 835982 ']' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 835982 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 835982 ']' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 835982 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 835982 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 835982' 00:12:52.668 killing process with pid 835982 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 835982 00:12:52.668 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 835982 00:12:52.926 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:52.926 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:52.926 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:52.926 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:52.926 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:52.926 21:19:27 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:52.926 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:52.926 21:19:27 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.458 21:19:29 nvmf_tcp.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.458 00:12:55.458 real 0m25.107s 00:12:55.458 user 1m21.996s 00:12:55.458 sys 0m4.033s 00:12:55.458 21:19:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.458 21:19:29 nvmf_tcp.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:55.458 ************************************ 00:12:55.458 END TEST nvmf_rpc 00:12:55.458 ************************************ 00:12:55.458 21:19:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:12:55.458 21:19:29 nvmf_tcp -- nvmf/nvmf.sh@30 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:55.458 21:19:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:55.458 21:19:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:55.458 21:19:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:12:55.458 ************************************ 00:12:55.458 START TEST nvmf_invalid 00:12:55.458 ************************************ 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:55.458 * Looking for test storage... 
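The qpair accounting at the end of the nvmf_rpc test above is done by the jsum helper (target/rpc.sh@19-20): jq extracts one number per poll group and awk sums them. A sketch, assuming the captured $stats JSON is fed in via a herestring (the trace shows the jq filter and the awk reducer, not the plumbing):

  jsum() {
      local filter=$1
      jq "$filter" <<< "$stats" | awk '{s += $1} END {print s}'
  }
  (( $(jsum '.poll_groups[].admin_qpairs') > 0 ))   # 2 + 2 + 1 + 2 = 7 in the stats above
  (( $(jsum '.poll_groups[].io_qpairs') > 0 ))      # 4 x 84 = 336 in the stats above

Both guards pass, matching the (( 7 > 0 )) and (( 336 > 0 )) checks in the log.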
00:12:55.458 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:55.458 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid 
-- nvmf/common.sh@448 -- # prepare_net_devs 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:12:55.459 21:19:29 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:57.358 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:57.358 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.358 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:57.359 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:57.359 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:57.359 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:57.359 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:12:57.359 00:12:57.359 --- 10.0.0.2 ping statistics --- 00:12:57.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.359 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:57.359 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:57.359 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.161 ms 00:12:57.359 00:12:57.359 --- 10.0.0.1 ping statistics --- 00:12:57.359 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:57.359 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=840478 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 840478 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 840478 ']' 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.359 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:57.359 21:19:31 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.359 [2024-07-11 21:19:32.012515] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
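The nvmf_tcp_init sequence traced just above builds the two-sided test bed from the pair of cvl_0_* ports: the target-side port moves into a network namespace while the initiator port stays in the root namespace. Condensed, with every command taken from the xtrace (nvmf/common.sh@244-268):

  NS=cvl_0_0_ns_spdk
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add "$NS"
  ip link set cvl_0_0 netns "$NS"                          # target-side port
  ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator address
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
  ip link set cvl_0_1 up
  ip netns exec "$NS" ip link set cvl_0_0 up
  ip netns exec "$NS" ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                       # root ns -> target
  ip netns exec "$NS" ping -c 1 10.0.0.1                   # target ns -> initiator

The two single-packet pings above (0.211 ms and 0.161 ms) are the health check for this topology; nvmf_tgt is then started inside the namespace via ip netns exec.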
00:12:57.359 [2024-07-11 21:19:32.012602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:57.359 EAL: No free 2048 kB hugepages reported on node 1 00:12:57.359 [2024-07-11 21:19:32.082874] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:57.617 [2024-07-11 21:19:32.177009] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:57.617 [2024-07-11 21:19:32.177060] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:57.617 [2024-07-11 21:19:32.177077] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:57.617 [2024-07-11 21:19:32.177090] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:57.617 [2024-07-11 21:19:32.177102] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:57.617 [2024-07-11 21:19:32.177183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:57.617 [2024-07-11 21:19:32.177238] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:57.617 [2024-07-11 21:19:32.177296] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:57.617 [2024-07-11 21:19:32.177299] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:57.617 21:19:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:57.617 21:19:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:12:57.617 21:19:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:57.617 21:19:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:57.617 21:19:32 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:57.617 21:19:32 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:57.617 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:57.617 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29074 00:12:57.876 [2024-07-11 21:19:32.617487] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:57.876 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:57.876 { 00:12:57.876 "nqn": "nqn.2016-06.io.spdk:cnode29074", 00:12:57.876 "tgt_name": "foobar", 00:12:57.876 "method": "nvmf_create_subsystem", 00:12:57.876 "req_id": 1 00:12:57.876 } 00:12:57.876 Got JSON-RPC error response 00:12:57.876 response: 00:12:57.876 { 00:12:57.876 "code": -32603, 00:12:57.876 "message": "Unable to find target foobar" 00:12:57.876 }' 00:12:57.876 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:57.876 { 00:12:57.876 "nqn": "nqn.2016-06.io.spdk:cnode29074", 00:12:57.876 "tgt_name": "foobar", 00:12:57.876 "method": "nvmf_create_subsystem", 00:12:57.876 "req_id": 1 00:12:57.876 } 00:12:57.876 Got JSON-RPC error response 00:12:57.876 response: 00:12:57.876 { 00:12:57.876 "code": -32603, 00:12:57.876 "message": "Unable to find target foobar" 
00:12:57.876 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:57.876 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:57.876 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17712 00:12:58.135 [2024-07-11 21:19:32.890398] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17712: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:58.395 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:58.395 { 00:12:58.395 "nqn": "nqn.2016-06.io.spdk:cnode17712", 00:12:58.395 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:58.395 "method": "nvmf_create_subsystem", 00:12:58.395 "req_id": 1 00:12:58.395 } 00:12:58.395 Got JSON-RPC error response 00:12:58.395 response: 00:12:58.395 { 00:12:58.395 "code": -32602, 00:12:58.395 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:58.395 }' 00:12:58.395 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:58.395 { 00:12:58.395 "nqn": "nqn.2016-06.io.spdk:cnode17712", 00:12:58.395 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:58.395 "method": "nvmf_create_subsystem", 00:12:58.395 "req_id": 1 00:12:58.395 } 00:12:58.395 Got JSON-RPC error response 00:12:58.395 response: 00:12:58.395 { 00:12:58.395 "code": -32602, 00:12:58.395 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:58.395 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:58.395 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:58.395 21:19:32 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31744 00:12:58.395 [2024-07-11 21:19:33.139210] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31744: invalid model number 'SPDK_Controller' 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:58.395 { 00:12:58.395 "nqn": "nqn.2016-06.io.spdk:cnode31744", 00:12:58.395 "model_number": "SPDK_Controller\u001f", 00:12:58.395 "method": "nvmf_create_subsystem", 00:12:58.395 "req_id": 1 00:12:58.395 } 00:12:58.395 Got JSON-RPC error response 00:12:58.395 response: 00:12:58.395 { 00:12:58.395 "code": -32602, 00:12:58.395 "message": "Invalid MN SPDK_Controller\u001f" 00:12:58.395 }' 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:58.395 { 00:12:58.395 "nqn": "nqn.2016-06.io.spdk:cnode31744", 00:12:58.395 "model_number": "SPDK_Controller\u001f", 00:12:58.395 "method": "nvmf_create_subsystem", 00:12:58.395 "req_id": 1 00:12:58.395 } 00:12:58.395 Got JSON-RPC error response 00:12:58.395 response: 00:12:58.395 { 00:12:58.395 "code": -32602, 00:12:58.395 "message": "Invalid MN SPDK_Controller\u001f" 00:12:58.395 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' 
'83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:58.395 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
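The long run of printf %x / echo -e / string+= entries around this point is gen_random_s assembling a random 21-character string one byte at a time from the chars array (decimal codes 32 through 127). A compact sketch: the per-character mechanism follows the trace, while the index selection via RANDOM is an assumption, since the trace shows only the chosen codes plus the RANDOM=0 seeding at invalid.sh@16:

  gen_random_s() {
      local length=$1 ll string=
      local chars=({32..127})   # printable ASCII codes, as in the traced chars array
      for ((ll = 0; ll < length; ll++)); do
          # decimal code -> hex -> literal character, exactly as traced
          string+=$(echo -e "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")")
      done
      echo "$string"            # invalid.sh@31; @28 first checks the leading char against '-'
  }

  RANDOM=0           # seeded once by the suite so runs are reproducible
  gen_random_s 21    # this pass produces the 21-character serial echoed just below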
00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 91 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5b' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='[' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
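These generated strings feed the same rejection checks exercised earlier in the test with fixed inputs: a bogus -t foobar target, then a serial number and a model number each carrying a trailing 0x1f control byte. Condensed below; the 2>&1 capture and the || true are assumptions about how the script tolerates the expected failures, while the rpc.py invocations and the matched error substrings are verbatim from the trace:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  out=$("$rpc" nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode29074 2>&1) || true
  [[ $out == *"Unable to find target"* ]]               # JSON-RPC error -32603
  out=$("$rpc" nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode17712 2>&1) || true
  [[ $out == *"Invalid SN"* ]]                          # JSON-RPC error -32602
  out=$("$rpc" nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode31744 2>&1) || true
  [[ $out == *"Invalid MN"* ]]                          # JSON-RPC error -32602

The random serial from gen_random_s is submitted the same way (nqn.2016-06.io.spdk:cnode22945 below) and is likewise rejected with Invalid SN.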
00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ * == \- ]] 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo '*0eyp.-:F[rC~Y87>ngB9' 00:12:58.655 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '*0eyp.-:F[rC~Y87>ngB9' nqn.2016-06.io.spdk:cnode22945 00:12:58.946 [2024-07-11 
00:12:58.946 [2024-07-11 21:19:33.484414] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode22945: invalid serial number '*0eyp.-:F[rC~Y87>ngB9'
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@54 -- # out='request:
00:12:58.946 {
00:12:58.946 "nqn": "nqn.2016-06.io.spdk:cnode22945",
00:12:58.946 "serial_number": "*0eyp.-:F[rC~Y87>ngB9",
00:12:58.946 "method": "nvmf_create_subsystem",
00:12:58.946 "req_id": 1
00:12:58.946 }
00:12:58.946 Got JSON-RPC error response
00:12:58.946 response:
00:12:58.946 {
00:12:58.946 "code": -32602,
00:12:58.946 "message": "Invalid SN *0eyp.-:F[rC~Y87>ngB9"
00:12:58.946 }'
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: { "nqn": "nqn.2016-06.io.spdk:cnode22945", "serial_number": "*0eyp.-:F[rC~Y87>ngB9", "method": "nvmf_create_subsystem", "req_id": 1 } Got JSON-RPC error response response: { "code": -32602, "message": "Invalid SN *0eyp.-:F[rC~Y87>ngB9" } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127')
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@21 -- # local chars
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@22 -- # local string
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 ))
00:12:58.946 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@24-25 -- # [trace condensed: 41 iterations appending 'R' ']' '\' '%' '}' 'W' '8' 'e' '%' 'y' 'r' 'h' 'E' 'F' 'j' 'k' 'd' $'\177' 'F' '~' 'M' '=' '$' 'X' '-' 'T' 'd' 'y' '@' '(' 'Q' '^' 'r' 'O' 'w' '{' 'O' '1' '@' '+' 'X']
00:12:58.947 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@28 -- # [[ R == \- ]]
00:12:58.947 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@31 -- # echo 'R]\%}W8e%yrhEFjkdF~M=$X-Tdy@(Q^rOw{O1@+X'
00:12:58.947 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'R]\%}W8e%yrhEFjkdF~M=$X-Tdy@(Q^rOw{O1@+X' nqn.2016-06.io.spdk:cnode747
00:12:59.205 [2024-07-11 21:19:33.881707] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode747: invalid model number 'R]\%}W8e%yrhEFjkdF~M=$X-Tdy@(Q^rOw{O1@+X'
00:12:59.205 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@58 -- # out='request:
00:12:59.205 {
00:12:59.205 "nqn": "nqn.2016-06.io.spdk:cnode747",
00:12:59.205 "model_number": "R]\\%}W8e%yrhEFjkd\u007fF~M=$X-Tdy@(Q^rOw{O1@+X",
00:12:59.205 "method": "nvmf_create_subsystem",
00:12:59.205 "req_id": 1
00:12:59.205 }
00:12:59.205 Got JSON-RPC error response
00:12:59.205 response:
00:12:59.205 {
00:12:59.205 "code": -32602,
00:12:59.205 "message": "Invalid MN R]\\%}W8e%yrhEFjkd\u007fF~M=$X-Tdy@(Q^rOw{O1@+X"
00:12:59.205 }'
00:12:59.205 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:59.205 { 00:12:59.205 "nqn": 
"nqn.2016-06.io.spdk:cnode747", 00:12:59.205 "model_number": "R]\\%}W8e%yrhEFjkd\u007fF~M=$X-Tdy@(Q^rOw{O1@+X", 00:12:59.205 "method": "nvmf_create_subsystem", 00:12:59.205 "req_id": 1 00:12:59.205 } 00:12:59.205 Got JSON-RPC error response 00:12:59.205 response: 00:12:59.205 { 00:12:59.205 "code": -32602, 00:12:59.205 "message": "Invalid MN R]\\%}W8e%yrhEFjkd\u007fF~M=$X-Tdy@(Q^rOw{O1@+X" 00:12:59.205 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:59.205 21:19:33 nvmf_tcp.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:59.464 [2024-07-11 21:19:34.142643] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:59.464 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:59.722 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:12:59.722 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:12:59.722 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:12:59.722 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:12:59.722 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 4421 00:12:59.980 [2024-07-11 21:19:34.648230] nvmf_rpc.c: 804:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:59.980 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:12:59.980 { 00:12:59.980 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:59.980 "listen_address": { 00:12:59.980 "trtype": "tcp", 00:12:59.980 "traddr": "", 00:12:59.980 "trsvcid": "4421" 00:12:59.980 }, 00:12:59.980 "method": "nvmf_subsystem_remove_listener", 00:12:59.980 "req_id": 1 00:12:59.980 } 00:12:59.980 Got JSON-RPC error response 00:12:59.980 response: 00:12:59.980 { 00:12:59.980 "code": -32602, 00:12:59.980 "message": "Invalid parameters" 00:12:59.980 }' 00:12:59.980 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:12:59.980 { 00:12:59.980 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:59.980 "listen_address": { 00:12:59.980 "trtype": "tcp", 00:12:59.980 "traddr": "", 00:12:59.980 "trsvcid": "4421" 00:12:59.980 }, 00:12:59.980 "method": "nvmf_subsystem_remove_listener", 00:12:59.980 "req_id": 1 00:12:59.980 } 00:12:59.980 Got JSON-RPC error response 00:12:59.980 response: 00:12:59.980 { 00:12:59.980 "code": -32602, 00:12:59.980 "message": "Invalid parameters" 00:12:59.980 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:59.980 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4037 -i 0 00:13:00.238 [2024-07-11 21:19:34.893055] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4037: invalid cntlid range [0-65519] 00:13:00.238 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:13:00.238 { 00:13:00.238 "nqn": "nqn.2016-06.io.spdk:cnode4037", 00:13:00.238 "min_cntlid": 0, 00:13:00.238 "method": "nvmf_create_subsystem", 00:13:00.238 "req_id": 1 00:13:00.238 } 00:13:00.238 Got JSON-RPC error response 00:13:00.238 response: 00:13:00.238 { 00:13:00.238 "code": -32602, 00:13:00.238 "message": "Invalid cntlid range [0-65519]" 
00:13:00.238 }' 00:13:00.238 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:13:00.238 { 00:13:00.238 "nqn": "nqn.2016-06.io.spdk:cnode4037", 00:13:00.238 "min_cntlid": 0, 00:13:00.238 "method": "nvmf_create_subsystem", 00:13:00.238 "req_id": 1 00:13:00.238 } 00:13:00.238 Got JSON-RPC error response 00:13:00.238 response: 00:13:00.238 { 00:13:00.238 "code": -32602, 00:13:00.238 "message": "Invalid cntlid range [0-65519]" 00:13:00.238 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:00.238 21:19:34 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode599 -i 65520 00:13:00.496 [2024-07-11 21:19:35.141869] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode599: invalid cntlid range [65520-65519] 00:13:00.496 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:13:00.496 { 00:13:00.496 "nqn": "nqn.2016-06.io.spdk:cnode599", 00:13:00.496 "min_cntlid": 65520, 00:13:00.496 "method": "nvmf_create_subsystem", 00:13:00.496 "req_id": 1 00:13:00.496 } 00:13:00.496 Got JSON-RPC error response 00:13:00.496 response: 00:13:00.496 { 00:13:00.496 "code": -32602, 00:13:00.496 "message": "Invalid cntlid range [65520-65519]" 00:13:00.496 }' 00:13:00.496 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:13:00.496 { 00:13:00.496 "nqn": "nqn.2016-06.io.spdk:cnode599", 00:13:00.496 "min_cntlid": 65520, 00:13:00.496 "method": "nvmf_create_subsystem", 00:13:00.496 "req_id": 1 00:13:00.496 } 00:13:00.496 Got JSON-RPC error response 00:13:00.496 response: 00:13:00.496 { 00:13:00.496 "code": -32602, 00:13:00.496 "message": "Invalid cntlid range [65520-65519]" 00:13:00.496 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:00.496 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode31105 -I 0 00:13:00.754 [2024-07-11 21:19:35.382654] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode31105: invalid cntlid range [1-0] 00:13:00.754 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:13:00.755 { 00:13:00.755 "nqn": "nqn.2016-06.io.spdk:cnode31105", 00:13:00.755 "max_cntlid": 0, 00:13:00.755 "method": "nvmf_create_subsystem", 00:13:00.755 "req_id": 1 00:13:00.755 } 00:13:00.755 Got JSON-RPC error response 00:13:00.755 response: 00:13:00.755 { 00:13:00.755 "code": -32602, 00:13:00.755 "message": "Invalid cntlid range [1-0]" 00:13:00.755 }' 00:13:00.755 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:13:00.755 { 00:13:00.755 "nqn": "nqn.2016-06.io.spdk:cnode31105", 00:13:00.755 "max_cntlid": 0, 00:13:00.755 "method": "nvmf_create_subsystem", 00:13:00.755 "req_id": 1 00:13:00.755 } 00:13:00.755 Got JSON-RPC error response 00:13:00.755 response: 00:13:00.755 { 00:13:00.755 "code": -32602, 00:13:00.755 "message": "Invalid cntlid range [1-0]" 00:13:00.755 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:00.755 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11576 -I 65520 00:13:01.012 [2024-07-11 21:19:35.651576] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode11576: invalid cntlid range [1-65520] 00:13:01.012 21:19:35 
nvmf_tcp.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:13:01.012 { 00:13:01.012 "nqn": "nqn.2016-06.io.spdk:cnode11576", 00:13:01.012 "max_cntlid": 65520, 00:13:01.012 "method": "nvmf_create_subsystem", 00:13:01.012 "req_id": 1 00:13:01.012 } 00:13:01.012 Got JSON-RPC error response 00:13:01.012 response: 00:13:01.012 { 00:13:01.012 "code": -32602, 00:13:01.012 "message": "Invalid cntlid range [1-65520]" 00:13:01.012 }' 00:13:01.012 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:13:01.012 { 00:13:01.013 "nqn": "nqn.2016-06.io.spdk:cnode11576", 00:13:01.013 "max_cntlid": 65520, 00:13:01.013 "method": "nvmf_create_subsystem", 00:13:01.013 "req_id": 1 00:13:01.013 } 00:13:01.013 Got JSON-RPC error response 00:13:01.013 response: 00:13:01.013 { 00:13:01.013 "code": -32602, 00:13:01.013 "message": "Invalid cntlid range [1-65520]" 00:13:01.013 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:01.013 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode26414 -i 6 -I 5 00:13:01.272 [2024-07-11 21:19:35.892358] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode26414: invalid cntlid range [6-5] 00:13:01.272 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:13:01.272 { 00:13:01.272 "nqn": "nqn.2016-06.io.spdk:cnode26414", 00:13:01.272 "min_cntlid": 6, 00:13:01.272 "max_cntlid": 5, 00:13:01.272 "method": "nvmf_create_subsystem", 00:13:01.272 "req_id": 1 00:13:01.272 } 00:13:01.272 Got JSON-RPC error response 00:13:01.272 response: 00:13:01.272 { 00:13:01.272 "code": -32602, 00:13:01.272 "message": "Invalid cntlid range [6-5]" 00:13:01.272 }' 00:13:01.272 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:13:01.272 { 00:13:01.272 "nqn": "nqn.2016-06.io.spdk:cnode26414", 00:13:01.272 "min_cntlid": 6, 00:13:01.272 "max_cntlid": 5, 00:13:01.272 "method": "nvmf_create_subsystem", 00:13:01.272 "req_id": 1 00:13:01.272 } 00:13:01.272 Got JSON-RPC error response 00:13:01.272 response: 00:13:01.272 { 00:13:01.272 "code": -32602, 00:13:01.272 "message": "Invalid cntlid range [6-5]" 00:13:01.272 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:13:01.272 21:19:35 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:13:01.272 { 00:13:01.272 "name": "foobar", 00:13:01.272 "method": "nvmf_delete_target", 00:13:01.272 "req_id": 1 00:13:01.272 } 00:13:01.272 Got JSON-RPC error response 00:13:01.272 response: 00:13:01.272 { 00:13:01.272 "code": -32602, 00:13:01.272 "message": "The specified target doesn'\''t exist, cannot delete it." 00:13:01.272 }' 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:13:01.272 { 00:13:01.272 "name": "foobar", 00:13:01.272 "method": "nvmf_delete_target", 00:13:01.272 "req_id": 1 00:13:01.272 } 00:13:01.272 Got JSON-RPC error response 00:13:01.272 response: 00:13:01.272 { 00:13:01.272 "code": -32602, 00:13:01.272 "message": "The specified target doesn't exist, cannot delete it." 
00:13:01.272 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:01.272 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:01.272 rmmod nvme_tcp 00:13:01.530 rmmod nvme_fabrics 00:13:01.530 rmmod nvme_keyring 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 840478 ']' 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 840478 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@948 -- # '[' -z 840478 ']' 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@952 -- # kill -0 840478 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # uname 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 840478 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:01.530 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:01.531 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@966 -- # echo 'killing process with pid 840478' 00:13:01.531 killing process with pid 840478 00:13:01.531 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@967 -- # kill 840478 00:13:01.531 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@972 -- # wait 840478 00:13:01.790 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:01.790 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:01.790 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:01.790 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:01.791 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:01.791 21:19:36 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:01.791 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:01.791 21:19:36 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.697 21:19:38 nvmf_tcp.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:03.697 00:13:03.697 real 0m8.646s 00:13:03.697 user 0m20.289s 00:13:03.697 sys 0m2.426s 00:13:03.697 21:19:38 nvmf_tcp.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:03.697 21:19:38 nvmf_tcp.nvmf_invalid -- 
common/autotest_common.sh@10 -- # set +x 00:13:03.697 ************************************ 00:13:03.697 END TEST nvmf_invalid 00:13:03.697 ************************************ 00:13:03.697 21:19:38 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:03.697 21:19:38 nvmf_tcp -- nvmf/nvmf.sh@31 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:03.697 21:19:38 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:03.697 21:19:38 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:03.697 21:19:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:03.697 ************************************ 00:13:03.697 START TEST nvmf_abort 00:13:03.697 ************************************ 00:13:03.697 21:19:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:13:03.957 * Looking for test storage... 00:13:03.957 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- paths/export.sh@2 -- # 
PATH=[trace condensed: paths/export.sh@2-@4 each re-prepend the /opt/golangci/1.54.2/bin, /opt/go/1.21.1/bin and /opt/protoc/21.7/bin toolchain directories onto the already long inherited PATH; the heavily duplicated full values are elided]
00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- paths/export.sh@6 -- # echo [the same PATH value]
00:13:03.957 21:19:38 
nvmf_tcp.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:13:03.957 21:19:38 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:13:05.858 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:05.859 
21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:05.859 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:05.859 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:05.859 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@389 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:05.859 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:05.859 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:06.119 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:06.119 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.252 ms 00:13:06.119 00:13:06.119 --- 10.0.0.2 ping statistics --- 00:13:06.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.119 rtt min/avg/max/mdev = 0.252/0.252/0.252/0.000 ms 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:06.119 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:06.119 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:13:06.119 00:13:06.119 --- 10.0.0.1 ping statistics --- 00:13:06.119 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:06.119 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=843103 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 843103 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 843103 ']' 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:06.119 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:06.119 21:19:40 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.119 [2024-07-11 21:19:40.739655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
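Behind the nvmfappstart/waitforlisten pair above is a small launch-and-poll pattern: start nvmf_tgt inside the target namespace with the requested core mask (0xE pins it to cores 1-3, which is why three reactors report in below), keep the pid for later cleanup, and block until the RPC socket answers. A simplified sketch with paths and flags copied from the log; the readiness loop is an illustrative stand-in for the suite's waitforlisten helper, polling the default /var/tmp/spdk.sock:

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!

    # Poll the default RPC socket until the target answers (stand-in for waitforlisten).
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done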
00:13:06.119 [2024-07-11 21:19:40.739750] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:06.119 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.119 [2024-07-11 21:19:40.809056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:06.377 [2024-07-11 21:19:40.897395] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:06.377 [2024-07-11 21:19:40.897447] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:06.377 [2024-07-11 21:19:40.897476] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:06.377 [2024-07-11 21:19:40.897488] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:06.377 [2024-07-11 21:19:40.897499] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:06.377 [2024-07-11 21:19:40.897566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:06.377 [2024-07-11 21:19:40.897647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:06.377 [2024-07-11 21:19:40.897650] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 [2024-07-11 21:19:41.037890] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 Malloc0 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 Delay0 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 
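Stripped of the rpc_cmd wrapper and xtrace noise, the target-side setup just traced is a short RPC sequence. The delay bdev is the important piece for this test: with bdev_delay_create's microsecond units, 1000000 puts roughly a second of average and p99 latency on every read and write, so submitted I/O stays outstanding long enough for aborts to catch it. Replayed as plain rpc.py calls against the default /var/tmp/spdk.sock, with the transport flags copied verbatim from the log:

    rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    rpc.py bdev_malloc_create 64 4096 -b Malloc0          # 64 MiB RAM-backed bdev, 4 KiB blocks
    rpc.py bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000       # avg/p99 read and write latency, in us
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0   # allow any host, serial SPDK0

The namespace (Delay0) and the 10.0.0.2:4420 TCP listener attached next in the log complete the target.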
00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 [2024-07-11 21:19:41.115902] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.377 21:19:41 nvmf_tcp.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:13:06.637 EAL: No free 2048 kB hugepages reported on node 1 00:13:06.637 [2024-07-11 21:19:41.262862] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:13:09.170 Initializing NVMe Controllers 00:13:09.170 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:13:09.170 controller IO queue size 128 less than required 00:13:09.170 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:13:09.170 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:13:09.170 Initialization complete. Launching workers. 
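The abort example just launched (-q 128, -t 1 in the command line above) keeps 128 I/Os outstanding against cnode0 and issues an Abort admin command for each request still in flight; because Delay0 holds every request for about a second, almost nothing completes before its abort lands. The counters it prints next reconcile exactly:

    aborts submitted (30260) + failed to submit (62) = 30322
    I/O completed    (123)   + I/O failed  (30199)   = 30322   # every submitted command is accounted for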
00:13:09.170 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 30199 00:13:09.170 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 30260, failed to submit 62 00:13:09.170 success 30203, unsuccess 57, failed 0 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:09.170 rmmod nvme_tcp 00:13:09.170 rmmod nvme_fabrics 00:13:09.170 rmmod nvme_keyring 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 843103 ']' 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 843103 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 843103 ']' 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 843103 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 843103 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 843103' 00:13:09.170 killing process with pid 843103 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@967 -- # kill 843103 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@972 -- # wait 843103 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:09.170 21:19:43 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.074 21:19:45 nvmf_tcp.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:11.074 00:13:11.074 real 0m7.300s 00:13:11.074 user 0m10.520s 00:13:11.074 sys 0m2.597s 00:13:11.074 21:19:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:11.074 21:19:45 nvmf_tcp.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:13:11.074 ************************************ 00:13:11.074 END TEST nvmf_abort 00:13:11.074 ************************************ 00:13:11.074 21:19:45 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:11.074 21:19:45 nvmf_tcp -- nvmf/nvmf.sh@32 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:11.074 21:19:45 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:11.074 21:19:45 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:11.074 21:19:45 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:11.074 ************************************ 00:13:11.074 START TEST nvmf_ns_hotplug_stress 00:13:11.074 ************************************ 00:13:11.074 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:13:11.074 * Looking for test storage... 00:13:11.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:11.332 21:19:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:11.332 21:19:45 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:11.332 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:11.333 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:11.333 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:11.333 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:11.333 21:19:45 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@298 -- # local -ga mlx 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:13.233 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:13.233 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:13.233 21:19:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:13.233 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:13.233 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:13.233 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:13.234 21:19:47 
nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:13.234 21:19:47 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:13.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:13.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:13:13.492 00:13:13.492 --- 10.0.0.2 ping statistics --- 00:13:13.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.492 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:13.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:13.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.064 ms 00:13:13.492 00:13:13.492 --- 10.0.0.1 ping statistics --- 00:13:13.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:13.492 rtt min/avg/max/mdev = 0.064/0.064/0.064/0.000 ms 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=845374 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 845374 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 845374 ']' 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:13.492 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:13.492 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.492 [2024-07-11 21:19:48.133503] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:13:13.492 [2024-07-11 21:19:48.133578] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:13.492 EAL: No free 2048 kB hugepages reported on node 1 00:13:13.492 [2024-07-11 21:19:48.199117] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:13.749 [2024-07-11 21:19:48.284505] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:13.749 [2024-07-11 21:19:48.284559] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:13.749 [2024-07-11 21:19:48.284587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:13.749 [2024-07-11 21:19:48.284598] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:13.749 [2024-07-11 21:19:48.284608] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:13.749 [2024-07-11 21:19:48.284688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:13.749 [2024-07-11 21:19:48.284759] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:13.749 [2024-07-11 21:19:48.284761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:13.749 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:13.749 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:13:13.749 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:13.749 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:13.749 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:13.749 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:13.749 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:13:13.749 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:13:14.006 [2024-07-11 21:19:48.623246] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:14.006 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:14.264 21:19:48 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:14.522 [2024-07-11 21:19:49.110042] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:14.522 21:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:14.779 21:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b 
Malloc0 00:13:15.037 Malloc0 00:13:15.037 21:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:13:15.294 Delay0 00:13:15.294 21:19:49 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:15.552 21:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:13:15.815 NULL1 00:13:15.815 21:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:16.117 21:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=845739 00:13:16.117 21:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:13:16.117 21:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:16.117 21:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:16.117 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.374 21:19:50 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:16.632 21:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:13:16.632 21:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:13:16.889 true 00:13:16.889 21:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:16.889 21:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:17.146 21:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:17.404 21:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:13:17.404 21:19:51 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:13:17.661 true 00:13:17.661 21:19:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:17.661 21:19:52 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:18.598 Read completed with error (sct=0, sc=11) 00:13:18.598 21:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:18.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:18.598 21:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:13:18.598 21:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:13:18.856 true 00:13:18.856 21:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:18.856 21:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:19.114 21:19:53 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:19.371 21:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:13:19.371 21:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:13:19.629 true 00:13:19.629 21:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:19.629 21:19:54 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:20.566 21:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:20.824 21:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:13:20.824 21:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:13:21.082 true 00:13:21.082 21:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:21.082 21:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:21.340 21:19:55 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:21.598 21:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:13:21.598 21:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:13:21.857 true 00:13:21.857 21:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:21.857 21:19:56 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
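The repeating remove_ns/add_ns/bdev_null_resize pattern above and below is the hot-plug stress loop itself: while the spdk_nvme_perf job started earlier (PID 845739, -t 30 -w randread) keeps reading, the script detaches namespace 1, re-attaches Delay0, and grows NULL1 by one block per pass. Reads that race a detach complete with sct=0, sc=11 (0x0b, Invalid Namespace or Format in the NVMe generic status set), which the tool rate-limits, hence the "Message suppressed 999 times" lines. A simplified sketch of the loop, assuming the same names as the trace:

    null_size=1000
    while kill -0 "$PERF_PID" 2>/dev/null; do            # run until the 30 s perf job exits
        rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1       # hot-remove NSID 1
        rpc.py nvmf_subsystem_add_ns    nqn.2016-06.io.spdk:cnode1 Delay0  # re-attach it
        null_size=$((null_size + 1))
        rpc.py bdev_null_resize NULL1 "$null_size"       # resize NULL1 while it serves I/O
    done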
00:13:22.794 21:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:22.794 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:22.794 21:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:13:22.794 21:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:13:23.052 true 00:13:23.052 21:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:23.052 21:19:57 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:23.310 21:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:23.569 21:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:13:23.569 21:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:13:23.826 true 00:13:23.826 21:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:23.826 21:19:58 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:24.762 21:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:24.762 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:25.022 21:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:13:25.022 21:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:13:25.022 true 00:13:25.022 21:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:25.022 21:19:59 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.280 21:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:25.537 21:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:13:25.537 21:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:13:25.795 true 00:13:25.795 21:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:25.795 21:20:00 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
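When the perf job finally exits, its summary (printed further below) shows two very different namespaces: NSID 1 is backed by Delay0 with ~1 s injected latency and is the one detached on every pass, while NSID 2 is the zero-cost NULL1 bdev. A quick Little's-law check on the reported numbers (IOPS x average latency ~= effective queue depth) suggests they are self-consistent:

    NSID 2: 10569.57 IOPS x 0.01207447 s ~= 127.6   # ~= the configured -q 128
    NSID 1:   705.07 IOPS x 0.09426106 s ~=  66.5   # below 128: NSID 1 is detached part of each pass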
00:13:27.171 21:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.171 21:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:13:27.171 21:20:01 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:13:27.429 true 00:13:27.429 21:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:27.429 21:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:27.687 21:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:27.945 21:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:13:27.945 21:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:13:28.203 true 00:13:28.203 21:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:28.203 21:20:02 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.145 21:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.145 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:29.145 21:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:13:29.145 21:20:03 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:13:29.402 true 00:13:29.402 21:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:29.402 21:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:29.968 21:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:29.968 21:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:13:29.968 21:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:13:30.225 true 00:13:30.225 21:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:30.225 21:20:04 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.168 21:20:05 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:31.475 21:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:13:31.475 21:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:13:31.732 true 00:13:31.732 21:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:31.732 21:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:31.990 21:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:32.248 21:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:13:32.248 21:20:06 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:13:32.506 true 00:13:32.506 21:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:32.506 21:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.073 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.331 21:20:07 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:33.331 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:33.589 21:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:13:33.589 21:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:13:33.589 true 00:13:33.847 21:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:33.847 21:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:33.847 21:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:34.105 21:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:13:34.105 21:20:08 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:13:34.363 true 00:13:34.363 21:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:34.363 21:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:35.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.301 21:20:09 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:35.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:35.559 21:20:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:13:35.559 21:20:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:13:35.816 true 00:13:35.816 21:20:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:35.816 21:20:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:36.075 21:20:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:36.333 21:20:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:13:36.333 21:20:10 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:13:36.590 true 00:13:36.590 21:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:36.590 21:20:11 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:37.528 21:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:37.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.528 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:37.785 21:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:13:37.785 21:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:13:38.043 true 00:13:38.043 21:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:38.043 21:20:12 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:38.300 21:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:38.557 21:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:13:38.557 21:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:13:38.814 true 00:13:38.814 21:20:13 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:38.814 21:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:39.071 21:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:39.328 21:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:13:39.328 21:20:13 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:13:39.585 true 00:13:39.585 21:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:39.585 21:20:14 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:40.520 21:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:40.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.520 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:40.778 21:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:13:40.778 21:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:13:41.034 true 00:13:41.034 21:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:41.034 21:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:41.291 21:20:15 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:41.548 21:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:13:41.548 21:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:13:41.806 true 00:13:41.806 21:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739 00:13:41.806 21:20:16 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:42.741 21:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:13:42.741 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:13:42.998 21:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:13:42.998 21:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:13:43.256 true
00:13:43.256 21:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739
00:13:43.256 21:20:17 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:43.514 21:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:43.772 21:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:13:43.772 21:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:13:44.030 true
00:13:44.030 21:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739
00:13:44.030 21:20:18 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:44.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:44.966 21:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:44.966 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:13:45.224 21:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:13:45.224 21:20:19 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:13:45.482 true
00:13:45.482 21:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739
00:13:45.482 21:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:45.741 21:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:46.033 21:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:13:46.033 21:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:13:46.291 true
00:13:46.291 21:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739
00:13:46.291 21:20:20 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:47.225 21:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:13:47.225 Initializing NVMe Controllers
00:13:47.225 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:47.225 Controller IO queue size 128, less than required.
00:13:47.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:47.225 Controller IO queue size 128, less than required.
00:13:47.225 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:13:47.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:13:47.226 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:13:47.226 Initialization complete. Launching workers.
00:13:47.226 ========================================================
00:13:47.226 Latency(us)
00:13:47.226 Device Information : IOPS MiB/s Average min max
00:13:47.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 705.07 0.34 94261.06 3295.52 1020185.17
00:13:47.226 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 10569.57 5.16 12074.47 3697.36 546280.89
00:13:47.226 ========================================================
00:13:47.226 Total : 11274.63 5.51 17214.06 3295.52 1020185.17
00:13:47.226
00:13:47.226 21:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030
00:13:47.226 21:20:21 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030
00:13:47.483 true
00:13:47.483 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 845739
00:13:47.483 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (845739) - No such process
00:13:47.483 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 845739
00:13:47.483 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:47.740 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:13:47.998 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:13:47.998 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:13:47.998 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:13:47.998 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:47.998 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:13:48.256 null0
00:13:48.256 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:48.256 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:48.256 21:20:22 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:13:48.514 null1
00:13:48.514 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:48.514 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:48.514 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:13:48.771 null2
00:13:48.771 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:48.771 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:48.771 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:13:49.029 null3
00:13:49.029 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:49.029 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:49.029 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:13:49.287 null4
00:13:49.287 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:49.287 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:49.287 21:20:23 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:13:49.544 null5
00:13:49.544 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:49.544 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:49.544 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:13:49.801 null6
00:13:49.801 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:49.801 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:49.801 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:13:50.059 null7
00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
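The sh@44-55 entries above collapse into a compact loop once the xtrace is unwoven. A minimal sketch reconstructed from the trace, not the verbatim ns_hotplug_stress.sh: $rpc_py and $perf_pid are assumed helper variables, and the null_size arithmetic is an assumption, since the trace only shows the resulting values 1019 through 1030.

  # Keep hot-swapping namespace 1 and growing NULL1 while the background
  # I/O generator (PID 845739 in this run) is still alive (sh@44-50).
  while kill -0 "$perf_pid"; do
      "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      "$rpc_py" nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))   # assumed increment; trace shows 1019, 1020, ..., 1030
      "$rpc_py" bdev_null_resize NULL1 "$null_size"
  done
  wait "$perf_pid"                                                  # sh@53
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1   # sh@54
  "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2   # sh@55

The failed kill -0 ("No such process") is what ends the loop: the I/O generator exiting marks the end of the stress window.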
00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:13:50.059 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
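From here on the trace is eight interleaved copies of the worker defined at sh@14-18. Reconstructed from the xtrace (a sketch, not the script's verbatim text; $rpc_py is an assumed helper):

  # Each worker pins one nsid/bdev pair and hot-plugs it ten times (sh@14-18).
  add_remove() {
      local nsid=$1 bdev=$2
      for ((i = 0; i < 10; i++)); do
          "$rpc_py" nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
          "$rpc_py" nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
      done
  }

Because eight of these run concurrently against one subsystem, the add/remove RPCs reach nqn.2016-06.io.spdk:cnode1 in arbitrary order; that racing is the point of the stress test.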
00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
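The driver around those workers (sh@58-66) is equally small; sketched from the trace, with the same $rpc_py assumption:

  nthreads=8
  pids=()
  for ((i = 0; i < nthreads; i++)); do             # sh@59-60
      "$rpc_py" bdev_null_create "null$i" 100 4096 # 100 MB null bdev, 4096-byte blocks
  done
  for ((i = 0; i < nthreads; i++)); do             # sh@62-64
      add_remove $((i + 1)) "null$i" &
      pids+=($!)
  done
  wait "${pids[@]}"                                # sh@66

The "wait 849799 849800 ..." entry just below is that last line executing with the eight workers' actual PIDs.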
00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 849799 849800 849802 849804 849806 849808 849810 849812 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.060 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.317 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.318 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.318 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.318 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.318 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.318 21:20:24 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.318 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.318 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:50.575 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 
nqn.2016-06.io.spdk:cnode1 null0 00:13:50.833 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:50.833 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:50.833 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:50.833 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:50.833 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:50.833 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:50.833 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:50.833 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.091 
21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.091 21:20:25 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.349 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:51.349 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:51.349 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.349 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:51.349 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:51.349 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:51.350 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:51.350 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:51.608 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:51.867 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:51.867 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 3 00:13:52.125 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.125 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.125 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.125 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.125 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.125 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.383 
21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.383 21:20:26 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.642 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:52.642 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:52.642 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:52.642 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:52.642 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:52.642 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:52.642 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:52.642 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:52.901 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.159 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:53.159 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:53.159 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.159 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 8 00:13:53.159 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:53.159 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.159 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.159 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.417 21:20:27 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:53.417 
21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.417 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.676 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:53.676 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:53.676 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:53.676 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:53.676 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:53.676 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:53.676 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:53.676 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:53.934 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.934 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.934 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:53.934 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.934 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.934 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:53.935 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.194 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.194 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:54.194 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.194 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.194 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.194 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:13:54.194 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.194 21:20:28 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.452 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
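A spot-check that is not part of this test, but can help when reproducing such a run by hand (hypothetical usage of the same RPC client), is to dump the subsystem mid-churn and see which namespaces happen to be attached at that instant:

  # Hypothetical aid, not in ns_hotplug_stress.sh: list the subsystems and
  # the namespaces currently attached while the workers race.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems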
00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.453 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.711 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:54.711 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:54.711 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:54.711 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:54.711 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:54.711 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:54.711 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:54.711 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.970 21:20:29 
nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:54.970 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:13:55.228 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:13:55.228 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:13:55.228 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:13:55.228 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:55.228 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:13:55.228 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:55.228 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:55.228 21:20:29 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:55.487 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:55.487 rmmod nvme_tcp 00:13:55.487 rmmod nvme_fabrics 00:13:55.487 rmmod nvme_keyring 00:13:55.745 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:55.745 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:13:55.745 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:13:55.745 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 845374 ']' 00:13:55.745 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 845374 00:13:55.745 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 845374 ']' 00:13:55.746 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 845374 00:13:55.746 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:13:55.746 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:55.746 21:20:30 
nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 845374 00:13:55.746 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:55.746 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:55.746 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 845374' 00:13:55.746 killing process with pid 845374 00:13:55.746 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 845374 00:13:55.746 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 845374 00:13:56.004 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:56.004 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:56.004 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:56.004 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:56.004 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:56.004 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:56.004 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:13:56.004 21:20:30 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:57.903 21:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:57.903 00:13:57.903 real 0m46.787s 00:13:57.903 user 3m33.339s 00:13:57.903 sys 0m16.367s 00:13:57.903 21:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:57.903 21:20:32 nvmf_tcp.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:13:57.903 ************************************ 00:13:57.903 END TEST nvmf_ns_hotplug_stress 00:13:57.903 ************************************ 00:13:57.903 21:20:32 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:13:57.903 21:20:32 nvmf_tcp -- nvmf/nvmf.sh@33 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:57.903 21:20:32 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:57.903 21:20:32 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:57.903 21:20:32 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:13:57.903 ************************************ 00:13:57.903 START TEST nvmf_connect_stress 00:13:57.903 ************************************ 00:13:57.903 21:20:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:58.161 * Looking for test storage... 
00:13:58.161 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> 
/dev/null' 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:58.161 21:20:32 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == 
mlx5 ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:00.059 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:00.059 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:00.059 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:00.059 21:20:34 
nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:00.059 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:00.059 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:00.059 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:14:00.059 00:14:00.059 --- 10.0.0.2 ping statistics --- 00:14:00.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.059 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:00.059 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:00.059 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:14:00.059 00:14:00.059 --- 10.0.0.1 ping statistics --- 00:14:00.059 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:00.059 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:00.059 21:20:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=852553 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 852553 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 852553 ']' 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:00.060 21:20:34 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.317 [2024-07-11 21:20:34.849539] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
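The successful pings just above are the final step of nvmf_tcp_init, which wires the two e810 ports into a loopback before the target app starts: one port (cvl_0_0, 10.0.0.2) is moved into the cvl_0_0_ns_spdk network namespace to host the target, and its peer (cvl_0_1, 10.0.0.1) stays in the root namespace as the initiator. Condensed from the nvmf/common.sh@244-268 trace above (run as root; every command here is copied from the log, nothing is new):

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port into the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root ns)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # admit NVMe/TCP traffic
  ping -c 1 10.0.0.2                                                  # root ns -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target ns -> initiator

The nvmf_tgt process is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt command at nvmf/common.sh@480 below), which is why pid 852553 listens on 10.0.0.2 port 4420 while the test connects from 10.0.0.1.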
00:14:00.317 [2024-07-11 21:20:34.849618] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.317 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.317 [2024-07-11 21:20:34.915404] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:00.317 [2024-07-11 21:20:35.004248] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:00.317 [2024-07-11 21:20:35.004312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.317 [2024-07-11 21:20:35.004326] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.317 [2024-07-11 21:20:35.004336] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.317 [2024-07-11 21:20:35.004346] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.317 [2024-07-11 21:20:35.004427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.317 [2024-07-11 21:20:35.004503] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:00.317 [2024-07-11 21:20:35.004505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.575 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.575 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:14:00.575 21:20:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:00.575 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:00.575 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.575 21:20:35 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.576 [2024-07-11 21:20:35.145496] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.576 [2024-07-11 21:20:35.183927] tcp.c: 
967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.576 NULL1 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=852691 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 
00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 EAL: No free 2048 kB hugepages reported on node 1 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.576 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:00.868 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:00.868 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:00.868 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:00.868 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:00.868 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.127 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.127 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:01.127 21:20:35 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.127 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.127 21:20:35 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.691 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.691 21:20:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:01.691 
21:20:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.691 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.691 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:01.949 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:01.949 21:20:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:01.949 21:20:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:01.949 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:01.949 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.206 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.206 21:20:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:02.206 21:20:36 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.206 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.206 21:20:36 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:02.463 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:02.463 21:20:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:02.463 21:20:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:02.463 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:02.463 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.028 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.028 21:20:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:03.028 21:20:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.028 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.028 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.285 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.285 21:20:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:03.285 21:20:37 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.285 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.285 21:20:37 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.542 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.542 21:20:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:03.542 21:20:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:03.542 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.542 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:03.799 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:03.799 21:20:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:03.799 21:20:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 
-- # rpc_cmd 00:14:03.799 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:03.799 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.056 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.056 21:20:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:04.056 21:20:38 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.056 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.056 21:20:38 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.621 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.621 21:20:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:04.621 21:20:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.621 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.621 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:04.878 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:04.878 21:20:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:04.878 21:20:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:04.878 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:04.878 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.135 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.135 21:20:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:05.135 21:20:39 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.135 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.135 21:20:39 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.393 21:20:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.393 21:20:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:05.393 21:20:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.393 21:20:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.393 21:20:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:05.650 21:20:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.650 21:20:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:05.650 21:20:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:05.650 21:20:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.650 21:20:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.214 21:20:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.214 21:20:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:06.214 21:20:40 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.214 21:20:40 nvmf_tcp.nvmf_connect_stress -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.214 21:20:40 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.471 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.471 21:20:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:06.471 21:20:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.471 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.471 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.729 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.729 21:20:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:06.729 21:20:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.729 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.729 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:06.986 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:06.986 21:20:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:06.986 21:20:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:06.986 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:06.986 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.243 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.243 21:20:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:07.243 21:20:41 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.243 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.243 21:20:41 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:07.807 21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:07.807 21:20:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:07.807 21:20:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:07.807 21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:07.807 21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.064 21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.064 21:20:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:08.064 21:20:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.064 21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.064 21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.321 21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.321 21:20:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:08.321 21:20:42 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.321 21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.321 
21:20:42 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.576 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.576 21:20:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:08.576 21:20:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.576 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.576 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:08.833 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:08.833 21:20:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:08.833 21:20:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:08.833 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:08.833 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.396 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.396 21:20:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:09.396 21:20:43 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.396 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.396 21:20:43 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.653 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.653 21:20:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:09.653 21:20:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.653 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.653 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:09.909 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:09.909 21:20:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:09.909 21:20:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:09.909 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:09.909 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.165 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.165 21:20:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:10.165 21:20:44 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.165 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.165 21:20:44 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:14:10.728 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:10.728 21:20:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691 00:14:10.728 21:20:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:14:10.728 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:10.728 21:20:45 nvmf_tcp.nvmf_connect_stress -- 
21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:10.728 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 852691
00:14:10.986 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (852691) - No such process
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 852691
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20}
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:14:10.986 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 852553 ']'
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 852553
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 852553 ']'
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 852553
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 852553
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 852553'
killing process with pid 852553
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 852553
00:14:10.986 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 852553
00:14:11.244 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:14:11.244 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:14:11.244 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:14:11.244 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:14:11.244 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns
00:14:11.244 21:20:45 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:14:11.244 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:14:11.244 21:20:45 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:13.145 21:20:47 nvmf_tcp.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:14:13.145
00:14:13.145 real    0m15.260s
00:14:13.145 user    0m38.357s
00:14:13.145 sys     0m5.844s
00:14:13.145 21:20:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable
00:14:13.145 21:20:47 nvmf_tcp.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:14:13.145 ************************************
00:14:13.145 END TEST nvmf_connect_stress
00:14:13.145 ************************************
00:14:13.145 21:20:47 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:14:13.145 21:20:47 nvmf_tcp -- nvmf/nvmf.sh@34 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:13.145 21:20:47 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:14:13.145 21:20:47 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:14:13.145 21:20:47 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:14:13.403 ************************************
00:14:13.403 START TEST nvmf_fused_ordering
00:14:13.403 ************************************
00:14:13.403 21:20:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:14:13.403 * Looking for test storage...
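The nvmftestfini sequence above is the standard teardown: flush I/O, retry unloading the kernel initiator modules (nvme-tcp can stay busy until its last queue drains, hence the 20-try loop with errors tolerated), then kill the SPDK target recorded in nvmfpid. A condensed sketch of the same pattern, simplified from the trace rather than copied from the helper verbatim:

    # Tear down: sync, retry module unload until nvme-tcp is free,
    # then kill the SPDK target process.
    sync
    set +e                      # modprobe -r may fail while queues drain
    for i in {1..20}; do
        modprobe -v -r nvme-tcp && break
        sleep 1
    done
    modprobe -v -r nvme-fabrics
    set -e
    kill "$nvmfpid" && wait "$nvmfpid"   # 852553 in the run above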
00:14:13.403 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:14:13.403 21:20:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:14:13.403 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s
00:14:13.403 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:14:13.403 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:14:13.403 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:14:13.403 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:20:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:20:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:20:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH
21:20:47 nvmf_tcp.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']'
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0
21:20:47 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']'
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns
21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
21:20:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
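Note how the PATH lines above grow: paths/export.sh re-prepends /opt/golangci, /opt/protoc and /opt/go on every nested source, so the same triplet ends up several copies deep by this point in the run. The duplication is harmless but noisy; a guard like the following (a hypothetical helper, not in the SPDK tree) would keep PATH flat:

    # Hypothetical: prepend a directory to PATH only if it is not already present.
    path_prepend() {
        case ":$PATH:" in
            *":$1:"*) ;;              # already there, do nothing
            *) PATH="$1:$PATH" ;;
        esac
    }
    path_prepend /opt/golangci/1.54.2/bin
    path_prepend /opt/protoc/21.7/bin
    path_prepend /opt/go/1.21.1/bin
    export PATH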
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable
00:14:13.404 21:20:47 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=()
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs
[... the matching declarations for pci_net_devs (local -a), pci_drivers (local -A), net_devs, e810, x722 and mlx (local -ga) follow at nvmf/common.sh@292-@298 ...]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
[... mlx+=() appends for the Mellanox IDs 0xa2dc, 0x1021, 0xa2d6, 0x101d, 0x1017, 0x1019, 0x1015 and 0x1013 follow at nvmf/common.sh@306-@318 ...]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
Found 0000:0a:00.0 (0x8086 - 0x159b)
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
Found 0000:0a:00.1 (0x8086 - 0x159b)
[... the same ice/unbound/0x1017/0x1019/rdma checks repeat for 0000:0a:00.1 at nvmf/common.sh@342-@352 ...]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
Found net devices under 0000:0a:00.0: cvl_0_0
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
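The discovery loop that resumes below maps each supported PCI function to its kernel net device by globbing sysfs, exactly as nvmf/common.sh@383 shows. Standalone, the same lookup is just (device address taken from the trace above):

    # List the net device(s) registered for a given PCI function.
    pci=0000:0a:00.0
    for net_dev in /sys/bus/pci/devices/$pci/net/*; do
        [ -e "$net_dev" ] || continue          # glob stays literal if nothing matches
        echo "NIC for $pci: ${net_dev##*/}"    # -> cvl_0_0 on this host
    done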
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
Found net devices under 0000:0a:00.1: cvl_0_1
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:14:15.305 21:20:49 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:14:15.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
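The nvmf_tcp_init sequence above splits the two ports of one adapter into target and initiator endpoints: the target-side port is moved into a private network namespace so both ends of the NVMe/TCP connection can live on a single host. Collected from the trace, the setup amounts to:

    # Put the target-side port in its own namespace and address both ends.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target NIC into the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator side, default namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                 # sanity-check the path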
00:14:15.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms
00:14:15.305
00:14:15.305 --- 10.0.0.2 ping statistics ---
00:14:15.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:15.305 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:14:15.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:14:15.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms
00:14:15.305
00:14:15.305 --- 10.0.0.1 ping statistics ---
00:14:15.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:14:15.305 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=855837
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 855837
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 855837 ']'
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100
00:14:15.305 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:14:15.564 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable
00:14:15.564 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.564 [2024-07-11 21:20:50.107323] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
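waitforlisten above blocks until the freshly started nvmf_tgt answers on its UNIX-domain RPC socket (rpc_addr=/var/tmp/spdk.sock, max_retries=100 per the trace). A minimal sketch of that wait, not the verbatim helper, assuming scripts/rpc.py from the SPDK tree:

    # Wait until the SPDK target's RPC server is reachable, then proceed.
    nvmfpid=855837
    rpc_addr=/var/tmp/spdk.sock
    for ((i = 0; i < 100; i++)); do
        kill -0 "$nvmfpid" || exit 1               # target died during startup
        if scripts/rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null; then
            break                                  # RPC socket is up
        fi
        sleep 0.5
    done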
00:14:15.564 [2024-07-11 21:20:50.107412] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:14:15.564 EAL: No free 2048 kB hugepages reported on node 1
00:14:15.564 [2024-07-11 21:20:50.176528] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:14:15.564 [2024-07-11 21:20:50.265810] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:14:15.564 [2024-07-11 21:20:50.265872] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:14:15.564 [2024-07-11 21:20:50.265888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:14:15.564 [2024-07-11 21:20:50.265902] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:14:15.564 [2024-07-11 21:20:50.265914] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:14:15.564 [2024-07-11 21:20:50.265950] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.822 [2024-07-11 21:20:50.415058] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.822 [2024-07-11 21:20:50.431258] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
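fused_ordering.sh lines 15-17 above do the whole target-side setup in three RPCs: create the TCP transport, create the subsystem, add a listener. Issued directly with scripts/rpc.py (which is what the rpc_cmd wrapper forwards to), with the exact arguments from the trace:

    # Target setup as performed above.
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # TCP transport, 8192 B I/O unit
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
        -a -s SPDK00000000000001 -m 10       # allow any host, set serial, max 10 namespaces
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
        -t tcp -a 10.0.0.2 -s 4420           # listen on the namespaced target IP

The RPC socket is a UNIX-domain socket, so these calls work from the default namespace even though nvmf_tgt itself runs inside cvl_0_0_ns_spdk.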
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.822 NULL1
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:15.822 21:20:50 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'
00:14:15.822 [2024-07-11 21:20:50.476568] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:14:15.822 [2024-07-11 21:20:50.476613] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid855857 ]
00:14:16.421 EAL: No free 2048 kB hugepages reported on node 1
00:14:16.421 Attached to nqn.2016-06.io.spdk:cnode1
00:14:16.421 Namespace ID: 1 size: 1GB
00:14:16.421 fused_ordering(0)
00:14:16.421 fused_ordering(1)
00:14:16.421 fused_ordering(2)
[... fused_ordering(3) through fused_ordering(43) continue, one marker per line ...]
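The namespace the test app just attached to is backed by a null bdev (writes are discarded, reads return zeroes): bdev_null_create NULL1 1000 512 makes a 1000 MiB device with 512-byte blocks, and nvmf_subsystem_add_ns exports it as namespace 1. The equivalent direct calls, plus the test invocation from line 22, taken from the trace (the marker stream resumes below):

    # Back the subsystem with a 1000 MiB, 512 B-block null bdev and export it.
    scripts/rpc.py bdev_null_create NULL1 1000 512
    scripts/rpc.py bdev_wait_for_examine
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

    # The initiator-side test binary is pointed at the listener by a
    # transport-ID string, exactly as in the trace:
    test/nvme/fused_ordering/fused_ordering \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'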
00:14:16.421 fused_ordering(44)
[... fused_ordering(45) through fused_ordering(205) continue, one sequential marker per line ...]
00:14:16.680 fused_ordering(206)
[... fused_ordering(207) through fused_ordering(410) ...]
00:14:17.246 fused_ordering(411)
[... fused_ordering(412) through fused_ordering(614) ...]
00:14:17.814 fused_ordering(615)
[... fused_ordering(616) through fused_ordering(820) ...]
00:14:18.759 fused_ordering(821)
[... fused_ordering(822) through fused_ordering(1011) ...]
00:14:18.760 fused_ordering(1012)
00:14:18.760 fused_ordering(1013) 00:14:18.760 fused_ordering(1014) 00:14:18.760 fused_ordering(1015) 00:14:18.760 fused_ordering(1016) 00:14:18.760 fused_ordering(1017) 00:14:18.760 fused_ordering(1018) 00:14:18.760 fused_ordering(1019) 00:14:18.760 fused_ordering(1020) 00:14:18.760 fused_ordering(1021) 00:14:18.760 fused_ordering(1022) 00:14:18.760 fused_ordering(1023) 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:18.760 rmmod nvme_tcp 00:14:18.760 rmmod nvme_fabrics 00:14:18.760 rmmod nvme_keyring 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 855837 ']' 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 855837 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 855837 ']' 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 855837 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 855837 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 855837' 00:14:18.760 killing process with pid 855837 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 855837 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 855837 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 14> /dev/null' 00:14:18.760 21:20:53 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.291 21:20:55 nvmf_tcp.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:21.291 00:14:21.291 real 0m7.622s 00:14:21.291 user 0m5.274s 00:14:21.291 sys 0m3.374s 00:14:21.291 21:20:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:21.291 21:20:55 nvmf_tcp.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:21.291 ************************************ 00:14:21.291 END TEST nvmf_fused_ordering 00:14:21.291 ************************************ 00:14:21.291 21:20:55 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:21.291 21:20:55 nvmf_tcp -- nvmf/nvmf.sh@35 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:21.291 21:20:55 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:21.291 21:20:55 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:21.291 21:20:55 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:21.291 ************************************ 00:14:21.291 START TEST nvmf_delete_subsystem 00:14:21.291 ************************************ 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:14:21.291 * Looking for test storage... 00:14:21.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- 
nvmf/common.sh@21 -- # NET_TYPE=phy
00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:[repeated toolchain components elided]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:14:21.291 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:[repeated toolchain components elided]:/var/lib/snapd/snap/bin
00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:[repeated toolchain components elided]:/var/lib/snapd/snap/bin
00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH
00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:[repeated toolchain components elided]:/var/lib/snapd/snap/bin
00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0
00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@25 -- #
'[' 0 -eq 1 ']' 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:14:21.292 21:20:55 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.188 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # 
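In outline, the build_nvmf_app_args assembly traced just above produces the nvmf_tgt invocation that appears further down in this trace (pid 858176). A minimal sketch, assuming the harness defaults visible in this run:

  # build_nvmf_app_args adds the shared-memory instance id and the 0xFFFF
  # tracepoint-group mask; the test itself supplies the -m 0x3 core mask.
  # nvmfappstart later runs the result inside the target network namespace.
  NVMF_APP=(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt)
  NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
  ip netns exec cvl_0_0_ns_spdk "${NVMF_APP[@]}" -m 0x3 &
  nvmfpid=$!
  waitforlisten "$nvmfpid"   # harness helper: waits for /var/tmp/spdk.sock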
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:23.189 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:23.189 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
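The discovery pass traced here, and continued on the lines that follow, reduces to roughly the loop below, with the netdev names read out of sysfs. A sketch, assuming the two E810 functions this host reports:

  # Match each supported PCI function against the Intel/Mellanox id tables,
  # then list /sys/bus/pci/devices/<bdf>/net to learn the attached netdev.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      echo "Found $pci (0x8086 - 0x159b)"
      ls "/sys/bus/pci/devices/$pci/net/"   # -> cvl_0_0, cvl_0_1
  done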
00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:23.189 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:23.189 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:23.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:23.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:14:23.189 00:14:23.189 --- 10.0.0.2 ping statistics --- 00:14:23.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.189 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:23.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:23.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:14:23.189 00:14:23.189 --- 10.0.0.1 ping statistics --- 00:14:23.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:23.189 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=858176 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 858176 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 858176 ']' 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:23.189 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:23.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:23.190 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:23.190 21:20:57 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.190 [2024-07-11 21:20:57.825787] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:23.190 [2024-07-11 21:20:57.825902] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:23.190 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.190 [2024-07-11 21:20:57.891618] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:14:23.448 [2024-07-11 21:20:57.981749] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:23.448 [2024-07-11 21:20:57.981824] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:23.448 [2024-07-11 21:20:57.981837] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:23.448 [2024-07-11 21:20:57.981863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:23.448 [2024-07-11 21:20:57.981872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:23.448 [2024-07-11 21:20:57.981955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:23.448 [2024-07-11 21:20:57.981960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 [2024-07-11 21:20:58.129994] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 [2024-07-11 21:20:58.146397] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 NULL1 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 Delay0 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=858206 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:23.448 21:20:58 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:14:23.448 EAL: No free 2048 kB hugepages reported on node 1 00:14:23.705 [2024-07-11 21:20:58.221003] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
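Everything from nvmftestinit through the perf launch above, plus the delete-under-load check whose output follows, condenses into one sequence. This is a sketch reassembled from the xtrace lines of this run: rpc_cmd is the harness wrapper around scripts/rpc.py, NOT is the harness helper that asserts a non-zero exit status, and the pids and interface names are the ones this run happened to use.

  # --- split the two E810 ports between initiator and namespaced target ---
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

  # --- TCP transport plus a subsystem backed by a deliberately slow bdev:
  # every I/O to Delay0 waits ~1s, so the queue is full when we delete.
  rpc_cmd nvmf_create_transport -t tcp -o -u 8192
  rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  rpc_cmd bdev_null_create NULL1 1000 512
  rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

  # --- background load at queue depth 128, then pull the subsystem away ---
  build/bin/spdk_nvme_perf -c 0xC \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
  perf_pid=$!          # 858206 in this run
  sleep 2
  rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1

  # Pass criterion: perf must notice the subsystem is gone (its queued I/O
  # completes with errors) and exit non-zero within ~15 seconds.
  delay=0
  while kill -0 "$perf_pid" 2>/dev/null; do
      sleep 0.5
      (( delay++ > 30 )) && exit 1
  done
  NOT wait "$perf_pid"

The error completions in the trace below are this race playing out: each in-flight request fails with sct=0, sc=8 (command aborted due to SQ deletion) once the subsystem's queues are torn down.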
00:14:25.601 21:21:00 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:14:25.601 21:21:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:14:25.601 21:21:00 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:14:25.601 - 00:14:25.602 [repeated 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries elided]
00:14:25.602 [2024-07-11 21:21:00.311747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3d7000d600 is same with the state(5) to be set
00:14:25.602 [further 'Read/Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' entries elided]
00:14:26.534 [2024-07-11 21:21:01.276128] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd06630 is same with the state(5) to be set
00:14:26.792 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:14:26.792 [2024-07-11 21:21:01.312592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9ec0 is same with the state(5) to be set
00:14:26.792 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:14:26.792 [2024-07-11 21:21:01.313956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3d70000c00 is same with the state(5) to be set
00:14:26.792 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:14:26.792 [2024-07-11 21:21:01.314561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f3d7000d2f0 is same with the state(5) to be set
00:14:26.792 [further 'Read/Write completed with error (sct=0, sc=8)' entries elided]
00:14:26.793 [2024-07-11 21:21:01.314860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xce9b00 is same with the state(5) to be set
00:14:26.793 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:14:26.793 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:14:26.793 Initializing NVMe Controllers
00:14:26.793 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:14:26.793 Controller IO queue size 128, less than required.
00:14:26.793 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:14:26.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:14:26.793 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:14:26.793 Initialization complete. Launching workers.
00:14:26.793 ========================================================
00:14:26.793                                                                   Latency(us)
00:14:26.793 Device Information                                             :     IOPS    MiB/s    Average        min        max
00:14:26.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:   188.73     0.09  904639.36     830.41 2000687.37
00:14:26.793 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:   163.40     0.08  959939.88     663.38 2002843.96
00:14:26.793 ========================================================
00:14:26.793 Total                                                          :   352.13     0.17  930300.67     663.38 2002843.96
00:14:26.793 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 858206
00:14:26.793 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:14:26.793 [2024-07-11 21:21:01.316075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd06630 (9): Bad file descriptor
00:14:26.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:14:27.050 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:14:27.050 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 858206
00:14:27.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (858206) - No such process
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 858206
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@648 -- # local es=0
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # valid_exec_arg wait 858206
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@636 -- # local arg=wait
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 858206
00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1
00:14:27.051 21:21:01
nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.051 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.309 [2024-07-11 21:21:01.832927] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=858711 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 858711 00:14:27.309 21:21:01 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:27.309 EAL: No free 2048 kB hugepages reported on node 1 00:14:27.309 [2024-07-11 21:21:01.893693] subsystem.c:1568:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
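The xtrace above shows delete_subsystem.sh's wait loop (script lines 56-60): it probes the background spdk_nvme_perf process with kill -0 every half second and gives up after 20 iterations. A minimal sketch of the same pattern, assuming a background workload whose PID is captured in $perf_pid (the placeholder command is illustrative, not from the script):

    #!/usr/bin/env bash
    sleep 3 &        # stand-in for the spdk_nvme_perf invocation above
    perf_pid=$!

    delay=0
    # kill -0 delivers no signal; it only tests whether the PID still exists.
    while kill -0 "$perf_pid" 2>/dev/null; do
        # Bail out after ~10 s (20 x 0.5 s) so a hung target fails the
        # test instead of stalling the CI job.
        if (( delay++ > 20 )); then
            echo "timed out waiting for pid $perf_pid" >&2
            exit 1
        fi
        sleep 0.5
    done

When the perf process exits on its own (as it does above, leaving "No such process" from the final kill -0), the loop falls through and the test proceeds to tear the subsystem down.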
00:14:27.873 21:21:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:27.873 21:21:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 858711 00:14:27.873 21:21:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:28.131 21:21:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:28.131 21:21:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 858711 00:14:28.131 21:21:02 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:28.693 21:21:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:28.693 21:21:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 858711 00:14:28.693 21:21:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.255 21:21:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.255 21:21:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 858711 00:14:29.255 21:21:03 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:29.818 21:21:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:29.818 21:21:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 858711 00:14:29.818 21:21:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.382 21:21:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.382 21:21:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 858711 00:14:30.382 21:21:04 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:14:30.382 Initializing NVMe Controllers 00:14:30.382 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:14:30.382 Controller IO queue size 128, less than required. 00:14:30.382 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:14:30.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:14:30.382 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:14:30.382 Initialization complete. Launching workers. 
00:14:30.382 ======================================================== 00:14:30.382 Latency(us) 00:14:30.382 Device Information : IOPS MiB/s Average min max 00:14:30.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003425.11 1000180.84 1011242.60 00:14:30.382 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1004828.81 1000201.18 1041567.67 00:14:30.382 ======================================================== 00:14:30.382 Total : 256.00 0.12 1004126.96 1000180.84 1041567.67 00:14:30.382 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 858711 00:14:30.638 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (858711) - No such process 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 858711 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:30.638 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:30.638 rmmod nvme_tcp 00:14:30.638 rmmod nvme_fabrics 00:14:30.638 rmmod nvme_keyring 00:14:30.895 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:30.895 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:14:30.895 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:14:30.895 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 858176 ']' 00:14:30.895 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 858176 00:14:30.895 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 858176 ']' 00:14:30.895 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 858176 00:14:30.896 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:14:30.896 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:30.896 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 858176 00:14:30.896 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:30.896 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:30.896 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 858176' 00:14:30.896 killing process with pid 858176 00:14:30.896 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 858176 00:14:30.896 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 858176 
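The teardown trace above walks through autotest_common.sh's killprocess helper: it verifies the PID is still alive, compares the process's command name on Linux so a recycled PID is never signalled by mistake, then kills and reaps it. A simplified reconstruction from the visible xtrace only (the real helper also special-cases sudo-wrapped processes, which the '[' reactor_0 = sudo ']' test above guards against):

    killprocess() {
        local pid=$1
        [ -z "$pid" ] && return 1      # refuse an empty argument
        kill -0 "$pid" || return 0     # already gone: nothing to do
        if [ "$(uname)" = Linux ]; then
            # Compare the command name so a reused PID belonging to an
            # unrelated process is never killed.
            local process_name
            process_name=$(ps --no-headers -o comm= "$pid")
            [ "$process_name" = sudo ] && return 1   # real helper handles sudo children
        fi
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }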
00:14:31.153 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:31.153 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:31.153 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:31.153 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:31.153 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:31.153 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:31.153 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:31.153 21:21:05 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.055 21:21:07 nvmf_tcp.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:33.055 00:14:33.055 real 0m12.100s 00:14:33.055 user 0m27.512s 00:14:33.055 sys 0m2.870s 00:14:33.055 21:21:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:33.055 21:21:07 nvmf_tcp.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:14:33.055 ************************************ 00:14:33.055 END TEST nvmf_delete_subsystem 00:14:33.055 ************************************ 00:14:33.055 21:21:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:14:33.055 21:21:07 nvmf_tcp -- nvmf/nvmf.sh@36 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:33.055 21:21:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:33.055 21:21:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:33.055 21:21:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:33.055 ************************************ 00:14:33.055 START TEST nvmf_ns_masking 00:14:33.055 ************************************ 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:33.055 * Looking for test storage... 
00:14:33.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:33.055 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:33.056 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:33.056 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:33.056 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:33.056 21:21:07 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:33.056 21:21:07 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:33.056 21:21:07 nvmf_tcp.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:33.056 21:21:07 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=[... repeated /opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin toolchain entries collapsed ...]:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:33.056 21:21:07 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=[... same collapsed toolchain PATH ...] 21:21:07 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=[... same collapsed toolchain PATH ...] 21:21:07 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 21:21:07 nvmf_tcp.nvmf_ns_masking -- paths/export.sh@6 -- # echo [... same collapsed toolchain PATH ...] 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f1239146-3d0b-4117-a6e7-0e538b2903d5 00:14:33.343 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:33.343 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=d1d02e9b-1a23-4338-9952-b0d749c7b20e 00:14:33.343 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@16 -- #
SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:33.343 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:33.343 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=0427ab32-d200-4e04-a598-68ac441665a9 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:33.344 21:21:07 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.249 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:35.249 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:35.249 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:35.249 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:35.249 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:35.249 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:35.250 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:35.250 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.250 
21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:35.250 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:35.250 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking 
-- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:35.250 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:35.250 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:14:35.250 00:14:35.250 --- 10.0.0.2 ping statistics --- 00:14:35.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.250 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:35.250 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:35.250 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.108 ms 00:14:35.250 00:14:35.250 --- 10.0.0.1 ping statistics --- 00:14:35.250 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:35.250 rtt min/avg/max/mdev = 0.108/0.108/0.108/0.000 ms 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=861578 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 861578 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 861578 ']' 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:35.250 21:21:09 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:35.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:35.250 21:21:09 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.250 [2024-07-11 21:21:09.889954] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:14:35.250 [2024-07-11 21:21:09.890039] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:35.250 EAL: No free 2048 kB hugepages reported on node 1 00:14:35.250 [2024-07-11 21:21:09.957517] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:35.508 [2024-07-11 21:21:10.053526] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:35.508 [2024-07-11 21:21:10.053590] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:35.508 [2024-07-11 21:21:10.053610] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:35.509 [2024-07-11 21:21:10.053628] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:35.509 [2024-07-11 21:21:10.053642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:35.509 [2024-07-11 21:21:10.053676] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:35.509 21:21:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:35.509 21:21:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:35.509 21:21:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:35.509 21:21:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:35.509 21:21:10 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.509 21:21:10 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:35.509 21:21:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:35.766 [2024-07-11 21:21:10.465987] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:35.766 21:21:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:35.766 21:21:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:35.766 21:21:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:36.024 Malloc1 00:14:36.024 21:21:10 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:36.283 Malloc2 00:14:36.283 21:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 
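Earlier in the trace (nvmf/common.sh@248-268) the harness built the point-to-point test network from the two ice ports it found: cvl_0_0 moves into a private network namespace as the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator (10.0.0.1), and an iptables rule admits NVMe/TCP traffic on port 4420. The same bring-up, collected into one sketch (run as root, assuming interfaces with these names exist):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The ping statistics above (0% loss in both directions) are the sanity check before the nvmf target is launched inside the namespace.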
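The target-side setup traced here reduces to four rpc.py calls: create the TCP transport with 8 KiB in-capsule data, back the namespaces with two 64 MiB/512 B malloc bdevs, and create a subsystem that allows any host. A condensed sketch, assuming a running nvmf_tgt and using a relative rpc.py path in place of the absolute one in the log:

    rpc_py=scripts/rpc.py    # /var/jenkins/.../spdk/scripts/rpc.py in the log
    $rpc_py nvmf_create_transport -t tcp -o -u 8192
    $rpc_py bdev_malloc_create 64 512 -b Malloc1
    $rpc_py bdev_malloc_create 64 512 -b Malloc2
    # -a: allow any host; -s: the serial number the initiator waits for
    $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME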
00:14:36.849 21:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:37.107 21:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:37.107 [2024-07-11 21:21:11.867620] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:37.365 21:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:37.365 21:21:11 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0427ab32-d200-4e04-a598-68ac441665a9 -a 10.0.0.2 -s 4420 -i 4 00:14:37.365 21:21:12 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:37.365 21:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:37.365 21:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:37.365 21:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:37.365 21:21:12 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:39.888 [ 0]:0x1 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f9c3866e1784bc590172fa68998ea56 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f9c3866e1784bc590172fa68998ea56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 
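The visibility probes that follow all go through ns_masking.sh's ns_is_visible helper: a namespace counts as visible when it shows up in nvme list-ns and identifies with a non-zero NGUID. Reconstructed from the trace at ns_masking.sh@43-45 (assuming the controller enumerated as /dev/nvme0, as it does here):

    ns_is_visible() {
        local nsid=$1                     # e.g. 0x1 or 0x2
        # The NSID must be listed at all ...
        nvme list-ns /dev/nvme0 | grep "$nsid" || return 1
        # ... and must report a non-zero NGUID; masked namespaces
        # come back as 32 zeros.
        local nguid
        nguid=$(nvme id-ns /dev/nvme0 -n "$nsid" -o json | jq -r .nguid)
        [[ $nguid != "00000000000000000000000000000000" ]]
    }

The initiator reaches that controller with the connect command from the trace, verbatim: nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0427ab32-d200-4e04-a598-68ac441665a9 -a 10.0.0.2 -s 4420 -i 4, then polls lsblk for the SPDKISFASTANDAWESOME serial before probing.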
00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:39.888 [ 0]:0x1 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f9c3866e1784bc590172fa68998ea56 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f9c3866e1784bc590172fa68998ea56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:39.888 [ 1]:0x2 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d85f9a84619d4df599ca4cb6a38fa02d 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d85f9a84619d4df599ca4cb6a38fa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:39.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.888 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:40.145 21:21:14 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:40.402 21:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:14:40.402 21:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0427ab32-d200-4e04-a598-68ac441665a9 -a 10.0.0.2 -s 4420 -i 4 00:14:40.658 21:21:15 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:40.658 21:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:40.658 21:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:40.658 21:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:40.658 21:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:40.658 21:21:15 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:42.554 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:42.554 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:42.554 21:21:17 
nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:42.554 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:42.554 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:42.554 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:42.554 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:42.554 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:42.812 [ 0]:0x2 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d85f9a84619d4df599ca4cb6a38fa02d 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
d85f9a84619d4df599ca4cb6a38fa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:42.812 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:43.070 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:43.070 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.070 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:43.070 [ 0]:0x1 00:14:43.070 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:43.070 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.070 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f9c3866e1784bc590172fa68998ea56 00:14:43.070 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f9c3866e1784bc590172fa68998ea56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.346 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:43.346 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.346 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:43.346 [ 1]:0x2 00:14:43.346 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:43.346 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.346 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d85f9a84619d4df599ca4cb6a38fa02d 00:14:43.346 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d85f9a84619d4df599ca4cb6a38fa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.346 21:21:17 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:43.603 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=00000000000000000000000000000000 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:43.604 [ 0]:0x2 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d85f9a84619d4df599ca4cb6a38fa02d 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d85f9a84619d4df599ca4cb6a38fa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:43.604 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.604 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:43.862 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:43.862 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 0427ab32-d200-4e04-a598-68ac441665a9 -a 10.0.0.2 -s 4420 -i 4 00:14:44.121 21:21:18 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:44.121 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:44.121 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:44.121 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:44.121 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:44.121 21:21:18 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 
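What the trace is exercising here is the masking state machine: a namespace attached with --no-auto-visible starts hidden from every host; nvmf_ns_add_host exposes it to one host NQN and nvmf_ns_remove_host hides it again, after which the connected host sees only an all-zero NGUID. The control-plane verbs, verbatim from the trace but collected into one sketch (reusing the $rpc_py variable from the earlier sketch):

    # Attach NSID 1 hidden from all hosts by default.
    $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # Expose NSID 1 to host1 only.
    $rpc_py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    # Mask it again for host1.
    $rpc_py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1

The NOT wrapper that brackets each masked-state probe inverts the wrapped command's status (the real helper in autotest_common.sh also distinguishes exit codes above 128, visible as the (( es > 128 )) checks); a simplified equivalent:

    NOT() {
        # Succeed only when the wrapped command fails.
        if "$@"; then return 1; else return 0; fi
    }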
00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:46.019 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:46.277 [ 0]:0x1 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=1f9c3866e1784bc590172fa68998ea56 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 1f9c3866e1784bc590172fa68998ea56 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:46.277 [ 1]:0x2 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d85f9a84619d4df599ca4cb6a38fa02d 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d85f9a84619d4df599ca4cb6a38fa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.277 21:21:20 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:46.535 [ 0]:0x2 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d85f9a84619d4df599ca4cb6a38fa02d 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d85f9a84619d4df599ca4cb6a38fa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:46.535 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:46.792 [2024-07-11 21:21:21.545190] nvmf_rpc.c:1791:nvmf_rpc_ns_visible_paused: 
*ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:46.792 request: 00:14:46.792 { 00:14:46.792 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:46.792 "nsid": 2, 00:14:46.792 "host": "nqn.2016-06.io.spdk:host1", 00:14:46.792 "method": "nvmf_ns_remove_host", 00:14:46.792 "req_id": 1 00:14:46.792 } 00:14:46.792 Got JSON-RPC error response 00:14:46.792 response: 00:14:46.792 { 00:14:46.792 "code": -32602, 00:14:46.792 "message": "Invalid parameters" 00:14:46.792 } 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:47.050 [ 0]:0x2 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d85f9a84619d4df599ca4cb6a38fa02d 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 
d85f9a84619d4df599ca4cb6a38fa02d != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:47.050 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:47.309 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=863193 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 863193 /var/tmp/host.sock 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 863193 ']' 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:47.309 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:47.309 21:21:21 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:47.309 [2024-07-11 21:21:21.899845] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
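At this point the test switches to host-socket mode: a second SPDK application is launched purely to drive the initiator-side bdev_nvme stack, with its JSON-RPC server bound to /var/tmp/host.sock so it cannot collide with the target's default /var/tmp/spdk.sock. A sketch of that bring-up, assuming the SPDK repository root as the working directory (rpc_get_methods serves here only as a readiness probe; the harness's waitforlisten does the equivalent polling):

# Second SPDK process on its own RPC socket, pinned by core mask -m 2.
./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
hostpid=$!
until ./scripts/rpc.py -s /var/tmp/host.sock rpc_get_methods >/dev/null 2>&1; do
  sleep 0.2   # wait for the UNIX domain socket to start answering
done
# From here, -s selects which process an RPC targets, e.g. the attach
# performed later in the trace:
./scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
  -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
  -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0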
00:14:47.309 [2024-07-11 21:21:21.899931] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid863193 ] 00:14:47.309 EAL: No free 2048 kB hugepages reported on node 1 00:14:47.309 [2024-07-11 21:21:21.964035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:47.309 [2024-07-11 21:21:22.058280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:47.567 21:21:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:47.567 21:21:22 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:47.567 21:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:48.132 21:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:48.390 21:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f1239146-3d0b-4117-a6e7-0e538b2903d5 00:14:48.390 21:21:22 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:48.390 21:21:22 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F12391463D0B4117A6E70E538B2903D5 -i 00:14:48.647 21:21:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid d1d02e9b-1a23-4338-9952-b0d749c7b20e 00:14:48.647 21:21:23 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:48.647 21:21:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g D1D02E9B1A2343389952B0D749C7B20E -i 00:14:48.903 21:21:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:49.160 21:21:23 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:49.417 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:49.417 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:49.674 nvme0n1 00:14:49.674 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:49.674 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b 
nvme1 00:14:50.271 nvme1n2 00:14:50.271 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:50.271 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:50.271 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:50.271 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:50.271 21:21:24 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:50.529 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:50.529 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:50.529 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:50.529 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:50.787 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f1239146-3d0b-4117-a6e7-0e538b2903d5 == \f\1\2\3\9\1\4\6\-\3\d\0\b\-\4\1\1\7\-\a\6\e\7\-\0\e\5\3\8\b\2\9\0\3\d\5 ]] 00:14:50.787 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:50.787 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:50.787 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ d1d02e9b-1a23-4338-9952-b0d749c7b20e == \d\1\d\0\2\e\9\b\-\1\a\2\3\-\4\3\3\8\-\9\9\5\2\-\b\0\d\7\4\9\c\7\b\2\0\e ]] 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 863193 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 863193 ']' 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 863193 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 863193 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 863193' 00:14:51.045 killing process with pid 863193 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 863193 00:14:51.045 21:21:25 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 863193 00:14:51.303 21:21:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:51.561 21:21:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:51.561 21:21:26 nvmf_tcp.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:51.561 21:21:26 
nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:51.561 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:51.561 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:51.561 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:51.561 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:51.561 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:51.819 rmmod nvme_tcp 00:14:51.819 rmmod nvme_fabrics 00:14:51.819 rmmod nvme_keyring 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 861578 ']' 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 861578 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 861578 ']' 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 861578 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 861578 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 861578' 00:14:51.819 killing process with pid 861578 00:14:51.819 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 861578 00:14:51.820 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 861578 00:14:52.078 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:52.078 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:52.078 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:52.078 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:52.078 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:52.078 21:21:26 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:52.078 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:52.078 21:21:26 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:53.982 21:21:28 nvmf_tcp.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:53.982 00:14:53.982 real 0m20.981s 00:14:53.982 user 0m27.498s 00:14:53.982 sys 0m4.093s 00:14:53.982 21:21:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:53.982 21:21:28 nvmf_tcp.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:53.982 ************************************ 00:14:53.982 END TEST nvmf_ns_masking 00:14:53.982 ************************************ 00:14:54.241 21:21:28 nvmf_tcp -- 
common/autotest_common.sh@1142 -- # return 0 00:14:54.241 21:21:28 nvmf_tcp -- nvmf/nvmf.sh@37 -- # [[ 1 -eq 1 ]] 00:14:54.241 21:21:28 nvmf_tcp -- nvmf/nvmf.sh@38 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:54.241 21:21:28 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:54.241 21:21:28 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:54.241 21:21:28 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:14:54.241 ************************************ 00:14:54.241 START TEST nvmf_nvme_cli 00:14:54.241 ************************************ 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:54.241 * Looking for test storage... 00:14:54.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:54.241 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:54.242 21:21:28 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:56.159 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:56.160 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:56.160 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:14:56.418 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:14:56.418 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:14:56.418 Found net devices under 0000:0a:00.0: cvl_0_0 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:14:56.418 Found net devices under 0000:0a:00.1: cvl_0_1 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:56.418 21:21:30 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:56.418 21:21:31 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:56.418 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:56.418 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:14:56.418 00:14:56.418 --- 10.0.0.2 ping statistics --- 00:14:56.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.418 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:56.418 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:56.418 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.082 ms 00:14:56.418 00:14:56.418 --- 10.0.0.1 ping statistics --- 00:14:56.418 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:56.418 rtt min/avg/max/mdev = 0.082/0.082/0.082/0.000 ms 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=865686 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 865686 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 865686 ']' 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:56.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:56.418 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.418 [2024-07-11 21:21:31.134654] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
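Before the nvme_cli test sends any NVMe/TCP traffic, the harness splits the two detected E810 ports across network namespaces and proves two-way reachability with plain pings, so a later connect failure cannot be mistaken for a fabric problem. A condensed sketch of the plumbing the trace just performed (the interface names cvl_0_0/cvl_0_1 come from the device scan above):

# Target-side port lives in a private netns; the initiator port stays
# in the root namespace, giving a real link between 10.0.0.1 and 10.0.0.2.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
# Allow the NVMe/TCP port through, then check both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1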
00:14:56.418 [2024-07-11 21:21:31.134728] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:56.418 EAL: No free 2048 kB hugepages reported on node 1 00:14:56.676 [2024-07-11 21:21:31.199374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:56.676 [2024-07-11 21:21:31.290112] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:56.676 [2024-07-11 21:21:31.290173] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:56.676 [2024-07-11 21:21:31.290201] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:56.676 [2024-07-11 21:21:31.290212] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:56.676 [2024-07-11 21:21:31.290222] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:56.676 [2024-07-11 21:21:31.290318] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:56.676 [2024-07-11 21:21:31.290436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:56.676 [2024-07-11 21:21:31.290688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:56.676 [2024-07-11 21:21:31.290691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.676 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.935 [2024-07-11 21:21:31.448694] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.935 Malloc0 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.935 Malloc1 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.935 21:21:31 
nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.935 [2024-07-11 21:21:31.534764] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:14:56.935 00:14:56.935 Discovery Log Number of Records 2, Generation counter 2 00:14:56.935 =====Discovery Log Entry 0====== 00:14:56.935 trtype: tcp 00:14:56.935 adrfam: ipv4 00:14:56.935 subtype: current discovery subsystem 00:14:56.935 treq: not required 00:14:56.935 portid: 0 00:14:56.935 trsvcid: 4420 00:14:56.935 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:56.935 traddr: 10.0.0.2 00:14:56.935 eflags: explicit discovery connections, duplicate discovery information 00:14:56.935 sectype: none 00:14:56.935 =====Discovery Log Entry 1====== 00:14:56.935 trtype: tcp 00:14:56.935 adrfam: ipv4 00:14:56.935 subtype: nvme subsystem 00:14:56.935 treq: not required 00:14:56.935 portid: 0 00:14:56.935 trsvcid: 4420 00:14:56.935 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:56.935 traddr: 10.0.0.2 00:14:56.935 eflags: none 00:14:56.935 sectype: none 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- 
target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:56.935 21:21:31 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:57.501 21:21:32 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:57.501 21:21:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:57.501 21:21:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:57.501 21:21:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:57.501 21:21:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:57.501 21:21:32 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:00.023 21:21:34 
nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:15:00.023 /dev/nvme0n1 ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:15:00.023 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:15:00.281 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:15:00.281 rmmod nvme_tcp 00:15:00.281 rmmod nvme_fabrics 00:15:00.281 rmmod nvme_keyring 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 865686 ']' 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 865686 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 865686 ']' 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 865686 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 865686 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 865686' 00:15:00.281 killing process with pid 865686 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 865686 00:15:00.281 21:21:34 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 865686 00:15:00.539 21:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:15:00.539 21:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:15:00.539 21:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:15:00.539 21:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:15:00.539 21:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:15:00.539 21:21:35 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:15:00.539 21:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:15:00.539 21:21:35 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:15:03.070 21:21:37 nvmf_tcp.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:15:03.070 00:15:03.070 real 0m8.510s 00:15:03.070 user 0m16.263s 00:15:03.070 sys 0m2.255s 00:15:03.070 21:21:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:03.070 21:21:37 nvmf_tcp.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:15:03.070 ************************************ 00:15:03.070 END TEST nvmf_nvme_cli 00:15:03.070 ************************************ 00:15:03.070 21:21:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:03.070 21:21:37 nvmf_tcp -- nvmf/nvmf.sh@40 -- # [[ 1 -eq 1 ]] 00:15:03.070 21:21:37 nvmf_tcp -- nvmf/nvmf.sh@41 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh 
--transport=tcp 00:15:03.070 21:21:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:03.070 21:21:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:03.070 21:21:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:03.070 ************************************ 00:15:03.070 START TEST nvmf_vfio_user 00:15:03.070 ************************************ 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:15:03.070 * Looking for test storage... 00:15:03.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.070 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:15:03.071 
21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=866563 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 866563' 00:15:03.071 Process pid: 866563 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 866563 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 866563 ']' 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:03.071 [2024-07-11 21:21:37.470655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:03.071 [2024-07-11 21:21:37.470741] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:03.071 EAL: No free 2048 kB hugepages reported on node 1 00:15:03.071 [2024-07-11 21:21:37.536415] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:03.071 [2024-07-11 21:21:37.630689] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:03.071 [2024-07-11 21:21:37.630763] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:03.071 [2024-07-11 21:21:37.630781] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:03.071 [2024-07-11 21:21:37.630796] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:03.071 [2024-07-11 21:21:37.630808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
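The target setup traced below follows a fixed recipe: create the VFIOUSER transport, then for each of the two devices make a socket directory under /var/run/vfio-user, back it with a 64 MiB malloc bdev, create a subsystem, attach the bdev as a namespace, and add a vfio-user listener on the directory. A condensed sketch of that sequence (paths as in this workspace; this is a sketch of the recipe, not an exact replay of the harness):

```bash
# Condensed from the rpc.py calls in the trace below; run from the spdk checkout.
cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' &    # target on 4 cores, all tracepoint groups
rpc=scripts/rpc.py                                    # harness waits for the RPC socket first

$rpc nvmf_create_transport -t VFIOUSER                # register the vfio-user transport
for i in 1 2; do
    mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
    $rpc bdev_malloc_create 64 512 -b Malloc$i        # 64 MiB RAM bdev, 512 B blocks
    $rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
    $rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
    $rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i \
        -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
done
```

Each controller is then addressed purely by its socket directory; that directory is the traddr in the transport IDs the client tools use later in this section.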
00:15:03.071 [2024-07-11 21:21:37.630865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:03.071 [2024-07-11 21:21:37.630896] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:03.071 [2024-07-11 21:21:37.630949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:03.071 [2024-07-11 21:21:37.630951] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:03.071 21:21:37 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:04.004 21:21:38 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:15:04.569 21:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:04.569 21:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:04.569 21:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:04.569 21:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:04.569 21:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:04.826 Malloc1 00:15:04.826 21:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:05.084 21:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:05.342 21:21:39 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:05.600 21:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:05.600 21:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:05.600 21:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:05.600 Malloc2 00:15:05.857 21:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:05.857 21:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:06.115 21:21:40 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:06.373 21:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:15:06.373 21:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:15:06.373 21:21:41 
nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:06.373 21:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:06.373 21:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:15:06.373 21:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:06.633 [2024-07-11 21:21:41.145023] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:06.633 [2024-07-11 21:21:41.145079] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid867028 ] 00:15:06.633 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.633 [2024-07-11 21:21:41.180104] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:15:06.633 [2024-07-11 21:21:41.186296] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:06.633 [2024-07-11 21:21:41.186326] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f3a2cdd2000 00:15:06.633 [2024-07-11 21:21:41.187295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.633 [2024-07-11 21:21:41.188290] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.633 [2024-07-11 21:21:41.189295] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.633 [2024-07-11 21:21:41.190297] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.633 [2024-07-11 21:21:41.191301] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.633 [2024-07-11 21:21:41.192310] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.633 [2024-07-11 21:21:41.193314] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:06.633 [2024-07-11 21:21:41.194321] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:06.633 [2024-07-11 21:21:41.195331] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:06.633 [2024-07-11 21:21:41.195352] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f3a2bb86000 00:15:06.633 [2024-07-11 21:21:41.196468] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:06.633 [2024-07-11 21:21:41.210477] vfio_user_pci.c: 
386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:15:06.633 [2024-07-11 21:21:41.210513] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:15:06.633 [2024-07-11 21:21:41.219462] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:06.633 [2024-07-11 21:21:41.219515] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:06.633 [2024-07-11 21:21:41.219608] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:15:06.633 [2024-07-11 21:21:41.219639] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:15:06.633 [2024-07-11 21:21:41.219650] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:15:06.633 [2024-07-11 21:21:41.220453] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:15:06.633 [2024-07-11 21:21:41.220473] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:15:06.633 [2024-07-11 21:21:41.220485] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:15:06.633 [2024-07-11 21:21:41.221453] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:15:06.633 [2024-07-11 21:21:41.221471] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:15:06.633 [2024-07-11 21:21:41.221484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:15:06.633 [2024-07-11 21:21:41.222461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:15:06.633 [2024-07-11 21:21:41.222478] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:06.633 [2024-07-11 21:21:41.223465] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:15:06.633 [2024-07-11 21:21:41.223484] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:15:06.633 [2024-07-11 21:21:41.223493] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:15:06.633 [2024-07-11 21:21:41.223504] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:06.633 [2024-07-11 21:21:41.223613] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:15:06.633 [2024-07-11 21:21:41.223625] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:06.633 [2024-07-11 21:21:41.223634] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:15:06.633 [2024-07-11 21:21:41.224473] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:15:06.633 [2024-07-11 21:21:41.225476] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:15:06.633 [2024-07-11 21:21:41.226488] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:06.633 [2024-07-11 21:21:41.227486] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:06.633 [2024-07-11 21:21:41.227620] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:06.633 [2024-07-11 21:21:41.228506] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:15:06.633 [2024-07-11 21:21:41.228523] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:06.633 [2024-07-11 21:21:41.228532] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:15:06.633 [2024-07-11 21:21:41.228555] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:15:06.633 [2024-07-11 21:21:41.228568] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:15:06.633 [2024-07-11 21:21:41.228594] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:06.633 [2024-07-11 21:21:41.228603] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.634 [2024-07-11 21:21:41.228624] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.228681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.228699] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:15:06.634 [2024-07-11 21:21:41.228710] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:15:06.634 [2024-07-11 21:21:41.228718] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:15:06.634 [2024-07-11 21:21:41.228726] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:06.634 [2024-07-11 21:21:41.228749] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:15:06.634 [2024-07-11 21:21:41.228764] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:15:06.634 [2024-07-11 21:21:41.228773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.228786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.228802] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.228821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.228844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.634 [2024-07-11 21:21:41.228858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.634 [2024-07-11 21:21:41.228870] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.634 [2024-07-11 21:21:41.228883] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:06.634 [2024-07-11 21:21:41.228891] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.228907] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.228921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.228936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.228947] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:15:06.634 [2024-07-11 21:21:41.228955] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.228966] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.228976] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.228989] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229078] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229093] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229106] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:06.634 [2024-07-11 21:21:41.229114] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:06.634 [2024-07-11 21:21:41.229123] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229156] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:15:06.634 [2024-07-11 21:21:41.229179] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229193] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229204] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:06.634 [2024-07-11 21:21:41.229216] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.634 [2024-07-11 21:21:41.229225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229272] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229286] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229297] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:06.634 [2024-07-11 21:21:41.229305] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.634 [2024-07-11 21:21:41.229314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229338] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229349] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 
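The register reads and state-machine messages running through this stretch are the client side of NVMe controller bring-up over vfio-user: read VS and CAP, toggle CC.EN, poll CSTS.RDY, then walk the identify and feature-setup states. They appear because the identify run enables the nvme, nvme_vfio, and vfio_pci debug log components; pulled out of the trace for readability, that invocation is:

```bash
# The spdk_nvme_identify run producing the bring-up trace in this section
# (run from the spdk checkout, with the vfio-user target already listening).
build/bin/spdk_nvme_identify \
    -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
    -g -L nvme -L nvme_vfio -L vfio_pci
```

Here -g selects single-file DPDK memory segments (visible as --single-file-segments in the EAL parameters above), and each -L switches on one DEBUG log flag, which is where the nvme_ctrlr.c and nvme_vfio_user.c lines come from.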
00:15:06.634 [2024-07-11 21:21:41.229361] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229372] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229380] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229388] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229396] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:15:06.634 [2024-07-11 21:21:41.229403] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:15:06.634 [2024-07-11 21:21:41.229411] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:15:06.634 [2024-07-11 21:21:41.229438] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229475] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229502] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229529] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229565] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:06.634 [2024-07-11 21:21:41.229575] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:06.634 [2024-07-11 21:21:41.229581] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:06.634 [2024-07-11 21:21:41.229586] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:06.634 [2024-07-11 21:21:41.229595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:06.634 [2024-07-11 21:21:41.229607] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:06.634 
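The PRP bookkeeping in these nvme_pcie_common.c entries follows the pattern visible in the commands themselves: a buffer that fits in one 4 KiB page travels with PRP1 alone (the 512-byte and 4096-byte transfers show PRP2 0x0), a two-page transfer carries the second page in PRP2 (the 8192-byte GET LOG PAGE above gets prp2 = 0x2000002f7000), and transfers beyond two pages would turn PRP2 into a pointer to a PRP list. A toy check of the page math (illustration only; assumes 4 KiB pages and page-aligned buffers):

```bash
# Illustration only: PRP entries needed for page-aligned transfers with 4 KiB pages.
page=4096
for len in 512 4096 8192; do
    pages=$(( (len + page - 1) / page ))    # ceil(len / page)
    if [ "$pages" -gt 1 ]; then prp2=used; else prp2=0x0; fi
    echo "len=$len -> $pages page(s), PRP2 $prp2"
done
```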
[2024-07-11 21:21:41.229615] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:06.634 [2024-07-11 21:21:41.229623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229634] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:06.634 [2024-07-11 21:21:41.229641] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:06.634 [2024-07-11 21:21:41.229650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229661] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:06.634 [2024-07-11 21:21:41.229669] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:06.634 [2024-07-11 21:21:41.229677] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:06.634 [2024-07-11 21:21:41.229688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:06.634 [2024-07-11 21:21:41.229750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:06.634 ===================================================== 00:15:06.634 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:06.634 ===================================================== 00:15:06.634 Controller Capabilities/Features 00:15:06.634 ================================ 00:15:06.634 Vendor ID: 4e58 00:15:06.634 Subsystem Vendor ID: 4e58 00:15:06.634 Serial Number: SPDK1 00:15:06.634 Model Number: SPDK bdev Controller 00:15:06.634 Firmware Version: 24.09 00:15:06.634 Recommended Arb Burst: 6 00:15:06.634 IEEE OUI Identifier: 8d 6b 50 00:15:06.634 Multi-path I/O 00:15:06.634 May have multiple subsystem ports: Yes 00:15:06.634 May have multiple controllers: Yes 00:15:06.634 Associated with SR-IOV VF: No 00:15:06.634 Max Data Transfer Size: 131072 00:15:06.635 Max Number of Namespaces: 32 00:15:06.635 Max Number of I/O Queues: 127 00:15:06.635 NVMe Specification Version (VS): 1.3 00:15:06.635 NVMe Specification Version (Identify): 1.3 00:15:06.635 Maximum Queue Entries: 256 00:15:06.635 Contiguous Queues Required: Yes 00:15:06.635 Arbitration Mechanisms Supported 00:15:06.635 Weighted Round Robin: Not Supported 00:15:06.635 Vendor Specific: Not Supported 00:15:06.635 Reset Timeout: 15000 ms 00:15:06.635 Doorbell Stride: 4 bytes 00:15:06.635 NVM Subsystem Reset: Not Supported 00:15:06.635 Command Sets Supported 00:15:06.635 NVM Command Set: Supported 00:15:06.635 Boot Partition: Not Supported 00:15:06.635 Memory Page Size Minimum: 4096 bytes 00:15:06.635 Memory Page Size Maximum: 4096 bytes 00:15:06.635 Persistent Memory Region: Not Supported 
00:15:06.635 Optional Asynchronous Events Supported 00:15:06.635 Namespace Attribute Notices: Supported 00:15:06.635 Firmware Activation Notices: Not Supported 00:15:06.635 ANA Change Notices: Not Supported 00:15:06.635 PLE Aggregate Log Change Notices: Not Supported 00:15:06.635 LBA Status Info Alert Notices: Not Supported 00:15:06.635 EGE Aggregate Log Change Notices: Not Supported 00:15:06.635 Normal NVM Subsystem Shutdown event: Not Supported 00:15:06.635 Zone Descriptor Change Notices: Not Supported 00:15:06.635 Discovery Log Change Notices: Not Supported 00:15:06.635 Controller Attributes 00:15:06.635 128-bit Host Identifier: Supported 00:15:06.635 Non-Operational Permissive Mode: Not Supported 00:15:06.635 NVM Sets: Not Supported 00:15:06.635 Read Recovery Levels: Not Supported 00:15:06.635 Endurance Groups: Not Supported 00:15:06.635 Predictable Latency Mode: Not Supported 00:15:06.635 Traffic Based Keep ALive: Not Supported 00:15:06.635 Namespace Granularity: Not Supported 00:15:06.635 SQ Associations: Not Supported 00:15:06.635 UUID List: Not Supported 00:15:06.635 Multi-Domain Subsystem: Not Supported 00:15:06.635 Fixed Capacity Management: Not Supported 00:15:06.635 Variable Capacity Management: Not Supported 00:15:06.635 Delete Endurance Group: Not Supported 00:15:06.635 Delete NVM Set: Not Supported 00:15:06.635 Extended LBA Formats Supported: Not Supported 00:15:06.635 Flexible Data Placement Supported: Not Supported 00:15:06.635 00:15:06.635 Controller Memory Buffer Support 00:15:06.635 ================================ 00:15:06.635 Supported: No 00:15:06.635 00:15:06.635 Persistent Memory Region Support 00:15:06.635 ================================ 00:15:06.635 Supported: No 00:15:06.635 00:15:06.635 Admin Command Set Attributes 00:15:06.635 ============================ 00:15:06.635 Security Send/Receive: Not Supported 00:15:06.635 Format NVM: Not Supported 00:15:06.635 Firmware Activate/Download: Not Supported 00:15:06.635 Namespace Management: Not Supported 00:15:06.635 Device Self-Test: Not Supported 00:15:06.635 Directives: Not Supported 00:15:06.635 NVMe-MI: Not Supported 00:15:06.635 Virtualization Management: Not Supported 00:15:06.635 Doorbell Buffer Config: Not Supported 00:15:06.635 Get LBA Status Capability: Not Supported 00:15:06.635 Command & Feature Lockdown Capability: Not Supported 00:15:06.635 Abort Command Limit: 4 00:15:06.635 Async Event Request Limit: 4 00:15:06.635 Number of Firmware Slots: N/A 00:15:06.635 Firmware Slot 1 Read-Only: N/A 00:15:06.635 Firmware Activation Without Reset: N/A 00:15:06.635 Multiple Update Detection Support: N/A 00:15:06.635 Firmware Update Granularity: No Information Provided 00:15:06.635 Per-Namespace SMART Log: No 00:15:06.635 Asymmetric Namespace Access Log Page: Not Supported 00:15:06.635 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:15:06.635 Command Effects Log Page: Supported 00:15:06.635 Get Log Page Extended Data: Supported 00:15:06.635 Telemetry Log Pages: Not Supported 00:15:06.635 Persistent Event Log Pages: Not Supported 00:15:06.635 Supported Log Pages Log Page: May Support 00:15:06.635 Commands Supported & Effects Log Page: Not Supported 00:15:06.635 Feature Identifiers & Effects Log Page:May Support 00:15:06.635 NVMe-MI Commands & Effects Log Page: May Support 00:15:06.635 Data Area 4 for Telemetry Log: Not Supported 00:15:06.635 Error Log Page Entries Supported: 128 00:15:06.635 Keep Alive: Supported 00:15:06.635 Keep Alive Granularity: 10000 ms 00:15:06.635 00:15:06.635 NVM Command Set Attributes 
00:15:06.635 ========================== 00:15:06.635 Submission Queue Entry Size 00:15:06.635 Max: 64 00:15:06.635 Min: 64 00:15:06.635 Completion Queue Entry Size 00:15:06.635 Max: 16 00:15:06.635 Min: 16 00:15:06.635 Number of Namespaces: 32 00:15:06.635 Compare Command: Supported 00:15:06.635 Write Uncorrectable Command: Not Supported 00:15:06.635 Dataset Management Command: Supported 00:15:06.635 Write Zeroes Command: Supported 00:15:06.635 Set Features Save Field: Not Supported 00:15:06.635 Reservations: Not Supported 00:15:06.635 Timestamp: Not Supported 00:15:06.635 Copy: Supported 00:15:06.635 Volatile Write Cache: Present 00:15:06.635 Atomic Write Unit (Normal): 1 00:15:06.635 Atomic Write Unit (PFail): 1 00:15:06.635 Atomic Compare & Write Unit: 1 00:15:06.635 Fused Compare & Write: Supported 00:15:06.635 Scatter-Gather List 00:15:06.635 SGL Command Set: Supported (Dword aligned) 00:15:06.635 SGL Keyed: Not Supported 00:15:06.635 SGL Bit Bucket Descriptor: Not Supported 00:15:06.635 SGL Metadata Pointer: Not Supported 00:15:06.635 Oversized SGL: Not Supported 00:15:06.635 SGL Metadata Address: Not Supported 00:15:06.635 SGL Offset: Not Supported 00:15:06.635 Transport SGL Data Block: Not Supported 00:15:06.635 Replay Protected Memory Block: Not Supported 00:15:06.635 00:15:06.635 Firmware Slot Information 00:15:06.635 ========================= 00:15:06.635 Active slot: 1 00:15:06.635 Slot 1 Firmware Revision: 24.09 00:15:06.635 00:15:06.635 00:15:06.635 Commands Supported and Effects 00:15:06.635 ============================== 00:15:06.635 Admin Commands 00:15:06.635 -------------- 00:15:06.635 Get Log Page (02h): Supported 00:15:06.635 Identify (06h): Supported 00:15:06.635 Abort (08h): Supported 00:15:06.635 Set Features (09h): Supported 00:15:06.635 Get Features (0Ah): Supported 00:15:06.635 Asynchronous Event Request (0Ch): Supported 00:15:06.635 Keep Alive (18h): Supported 00:15:06.635 I/O Commands 00:15:06.635 ------------ 00:15:06.635 Flush (00h): Supported LBA-Change 00:15:06.635 Write (01h): Supported LBA-Change 00:15:06.635 Read (02h): Supported 00:15:06.635 Compare (05h): Supported 00:15:06.635 Write Zeroes (08h): Supported LBA-Change 00:15:06.635 Dataset Management (09h): Supported LBA-Change 00:15:06.635 Copy (19h): Supported LBA-Change 00:15:06.635 00:15:06.635 Error Log 00:15:06.635 ========= 00:15:06.635 00:15:06.635 Arbitration 00:15:06.635 =========== 00:15:06.635 Arbitration Burst: 1 00:15:06.635 00:15:06.635 Power Management 00:15:06.635 ================ 00:15:06.635 Number of Power States: 1 00:15:06.635 Current Power State: Power State #0 00:15:06.635 Power State #0: 00:15:06.635 Max Power: 0.00 W 00:15:06.635 Non-Operational State: Operational 00:15:06.635 Entry Latency: Not Reported 00:15:06.635 Exit Latency: Not Reported 00:15:06.635 Relative Read Throughput: 0 00:15:06.635 Relative Read Latency: 0 00:15:06.635 Relative Write Throughput: 0 00:15:06.635 Relative Write Latency: 0 00:15:06.635 Idle Power: Not Reported 00:15:06.635 Active Power: Not Reported 00:15:06.635 Non-Operational Permissive Mode: Not Supported 00:15:06.635 00:15:06.635 Health Information 00:15:06.635 ================== 00:15:06.635 Critical Warnings: 00:15:06.635 Available Spare Space: OK 00:15:06.635 Temperature: OK 00:15:06.635 Device Reliability: OK 00:15:06.635 Read Only: No 00:15:06.635 Volatile Memory Backup: OK 00:15:06.635 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:06.636 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:06.636 Available Spare: 0% 00:15:06.636 
[2024-07-11 21:21:41.229886] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:06.636 [2024-07-11 21:21:41.229902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:06.636 [2024-07-11 21:21:41.229951] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:15:06.636 [2024-07-11 21:21:41.229969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.636 [2024-07-11 21:21:41.229981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.636 [2024-07-11 21:21:41.229991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.636 [2024-07-11 21:21:41.230001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:06.636 [2024-07-11 21:21:41.230518] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:15:06.636 [2024-07-11 21:21:41.230538] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:15:06.636 [2024-07-11 21:21:41.231516] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:06.636 [2024-07-11 21:21:41.231611] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:15:06.636 [2024-07-11 21:21:41.231626] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:15:06.636 [2024-07-11 21:21:41.232526] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:15:06.636 [2024-07-11 21:21:41.232548] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:15:06.636 [2024-07-11 21:21:41.232602] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:15:06.636 [2024-07-11 21:21:41.235764] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:06.636 Available Spare Threshold: 0% 00:15:06.636 Life Percentage Used: 0% 00:15:06.636 Data Units Read: 0 00:15:06.636 Data Units Written: 0 00:15:06.636 Host Read Commands: 0 00:15:06.636 Host Write Commands: 0 00:15:06.636 Controller Busy Time: 0 minutes 00:15:06.636 Power Cycles: 0 00:15:06.636 Power On Hours: 0 hours 00:15:06.636 Unsafe Shutdowns: 0 00:15:06.636 Unrecoverable Media Errors: 0 00:15:06.636 Lifetime Error Log Entries: 0 00:15:06.636 Warning Temperature Time: 0 minutes 00:15:06.636 Critical Temperature Time: 0 minutes 00:15:06.636 00:15:06.636 Number of Queues 00:15:06.636 ================ 00:15:06.636 Number of I/O Submission Queues: 127 00:15:06.636 Number of I/O Completion Queues: 127 00:15:06.636 00:15:06.636 Active Namespaces 00:15:06.636 ================= 00:15:06.636 Namespace ID:1 00:15:06.636 Error Recovery Timeout: Unlimited 00:15:06.636 Command
Set Identifier: NVM (00h) 00:15:06.636 Deallocate: Supported 00:15:06.636 Deallocated/Unwritten Error: Not Supported 00:15:06.636 Deallocated Read Value: Unknown 00:15:06.636 Deallocate in Write Zeroes: Not Supported 00:15:06.636 Deallocated Guard Field: 0xFFFF 00:15:06.636 Flush: Supported 00:15:06.636 Reservation: Supported 00:15:06.636 Namespace Sharing Capabilities: Multiple Controllers 00:15:06.636 Size (in LBAs): 131072 (0GiB) 00:15:06.636 Capacity (in LBAs): 131072 (0GiB) 00:15:06.636 Utilization (in LBAs): 131072 (0GiB) 00:15:06.636 NGUID: 323034520AB24C5F972C8E773A5985D8 00:15:06.636 UUID: 32303452-0ab2-4c5f-972c-8e773a5985d8 00:15:06.636 Thin Provisioning: Not Supported 00:15:06.636 Per-NS Atomic Units: Yes 00:15:06.636 Atomic Boundary Size (Normal): 0 00:15:06.636 Atomic Boundary Size (PFail): 0 00:15:06.636 Atomic Boundary Offset: 0 00:15:06.636 Maximum Single Source Range Length: 65535 00:15:06.636 Maximum Copy Length: 65535 00:15:06.636 Maximum Source Range Count: 1 00:15:06.636 NGUID/EUI64 Never Reused: No 00:15:06.636 Namespace Write Protected: No 00:15:06.636 Number of LBA Formats: 1 00:15:06.636 Current LBA Format: LBA Format #00 00:15:06.636 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:06.636 00:15:06.636 21:21:41 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:06.636 EAL: No free 2048 kB hugepages reported on node 1 00:15:06.895 [2024-07-11 21:21:41.467665] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.216 Initializing NVMe Controllers 00:15:12.216 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:12.216 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:12.216 Initialization complete. Launching workers. 00:15:12.216 ======================================================== 00:15:12.216 Latency(us) 00:15:12.216 Device Information : IOPS MiB/s Average min max 00:15:12.216 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 34914.86 136.39 3665.39 1169.98 7387.01 00:15:12.216 ======================================================== 00:15:12.216 Total : 34914.86 136.39 3665.39 1169.98 7387.01 00:15:12.216 00:15:12.216 [2024-07-11 21:21:46.492631] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.216 21:21:46 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:12.216 EAL: No free 2048 kB hugepages reported on node 1 00:15:12.216 [2024-07-11 21:21:46.732831] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:17.492 Initializing NVMe Controllers 00:15:17.492 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:17.492 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:17.492 Initialization complete. Launching workers. 
00:15:17.492 ======================================================== 00:15:17.492 Latency(us) 00:15:17.492 Device Information : IOPS MiB/s Average min max 00:15:17.492 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16013.71 62.55 7992.42 5944.39 14976.42 00:15:17.492 ======================================================== 00:15:17.492 Total : 16013.71 62.55 7992.42 5944.39 14976.42 00:15:17.492 00:15:17.492 [2024-07-11 21:21:51.770228] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:17.492 21:21:51 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:17.492 EAL: No free 2048 kB hugepages reported on node 1 00:15:17.492 [2024-07-11 21:21:51.971254] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:22.785 [2024-07-11 21:21:57.046127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:22.785 Initializing NVMe Controllers 00:15:22.785 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.785 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:22.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:22.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:22.785 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:22.785 Initialization complete. Launching workers. 00:15:22.785 Starting thread on core 2 00:15:22.785 Starting thread on core 3 00:15:22.785 Starting thread on core 1 00:15:22.785 21:21:57 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:22.785 EAL: No free 2048 kB hugepages reported on node 1 00:15:22.785 [2024-07-11 21:21:57.350936] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:26.079 [2024-07-11 21:22:00.429599] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:26.079 Initializing NVMe Controllers 00:15:26.079 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.079 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.079 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:26.079 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:26.079 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:26.079 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:26.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:26.079 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:26.079 Initialization complete. Launching workers. 
00:15:26.079 Starting thread on core 1 with urgent priority queue 00:15:26.079 Starting thread on core 2 with urgent priority queue 00:15:26.079 Starting thread on core 3 with urgent priority queue 00:15:26.079 Starting thread on core 0 with urgent priority queue 00:15:26.079 SPDK bdev Controller (SPDK1 ) core 0: 5122.33 IO/s 19.52 secs/100000 ios 00:15:26.079 SPDK bdev Controller (SPDK1 ) core 1: 5970.67 IO/s 16.75 secs/100000 ios 00:15:26.079 SPDK bdev Controller (SPDK1 ) core 2: 6179.67 IO/s 16.18 secs/100000 ios 00:15:26.079 SPDK bdev Controller (SPDK1 ) core 3: 6275.67 IO/s 15.93 secs/100000 ios 00:15:26.079 ======================================================== 00:15:26.079 00:15:26.079 21:22:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:26.079 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.079 [2024-07-11 21:22:00.731267] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:26.079 Initializing NVMe Controllers 00:15:26.079 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.079 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:26.079 Namespace ID: 1 size: 0GB 00:15:26.079 Initialization complete. 00:15:26.079 INFO: using host memory buffer for IO 00:15:26.079 Hello world! 00:15:26.079 [2024-07-11 21:22:00.768850] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:26.079 21:22:00 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:26.338 EAL: No free 2048 kB hugepages reported on node 1 00:15:26.338 [2024-07-11 21:22:01.047228] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.717 Initializing NVMe Controllers 00:15:27.718 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.718 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:27.718 Initialization complete. Launching workers. 
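(Editor's note: the overhead tool launched above was passed -H, which requests the submit/complete latency histograms printed next; each row looks like a latency bucket in microseconds, the cumulative percentage of I/Os, and the bucket's raw count in parentheses. A hypothetical post-processing one-liner, assuming the tool's stdout was saved to overhead.log without the CI timestamp prefix:

  awk '/\(in ns\) avg, min, max/ { gsub(",", ""); print $1 " avg: " $(NF-2) " ns" }' overhead.log
  # prints the average submit and complete latencies, e.g. "submit avg: 6950.7 ns")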
00:15:27.718 submit (in ns) avg, min, max = 6950.7, 3491.1, 4005101.1 00:15:27.718 complete (in ns) avg, min, max = 27275.1, 2061.1, 4015341.1 00:15:27.718 00:15:27.718 Submit histogram 00:15:27.718 ================ 00:15:27.718 Range in us Cumulative Count 00:15:27.718 3.484 - 3.508: 0.0373% ( 5) 00:15:27.718 3.508 - 3.532: 0.2612% ( 30) 00:15:27.718 3.532 - 3.556: 0.8359% ( 77) 00:15:27.718 3.556 - 3.579: 3.0602% ( 298) 00:15:27.718 3.579 - 3.603: 7.4563% ( 589) 00:15:27.718 3.603 - 3.627: 13.9125% ( 865) 00:15:27.718 3.627 - 3.650: 22.7497% ( 1184) 00:15:27.718 3.650 - 3.674: 32.6392% ( 1325) 00:15:27.718 3.674 - 3.698: 41.1927% ( 1146) 00:15:27.718 3.698 - 3.721: 48.8580% ( 1027) 00:15:27.718 3.721 - 3.745: 53.9334% ( 680) 00:15:27.718 3.745 - 3.769: 58.0758% ( 555) 00:15:27.718 3.769 - 3.793: 61.6808% ( 483) 00:15:27.718 3.793 - 3.816: 65.0769% ( 455) 00:15:27.718 3.816 - 3.840: 68.3162% ( 434) 00:15:27.718 3.840 - 3.864: 72.0854% ( 505) 00:15:27.718 3.864 - 3.887: 76.2726% ( 561) 00:15:27.718 3.887 - 3.911: 80.4971% ( 566) 00:15:27.718 3.911 - 3.935: 83.9379% ( 461) 00:15:27.718 3.935 - 3.959: 86.1920% ( 302) 00:15:27.718 3.959 - 3.982: 88.0131% ( 244) 00:15:27.718 3.982 - 4.006: 89.4910% ( 198) 00:15:27.718 4.006 - 4.030: 90.6777% ( 159) 00:15:27.718 4.030 - 4.053: 91.6182% ( 126) 00:15:27.718 4.053 - 4.077: 92.4317% ( 109) 00:15:27.718 4.077 - 4.101: 93.4990% ( 143) 00:15:27.718 4.101 - 4.124: 94.3051% ( 108) 00:15:27.718 4.124 - 4.148: 95.0664% ( 102) 00:15:27.718 4.148 - 4.172: 95.5292% ( 62) 00:15:27.718 4.172 - 4.196: 95.9098% ( 51) 00:15:27.718 4.196 - 4.219: 96.2308% ( 43) 00:15:27.718 4.219 - 4.243: 96.4398% ( 28) 00:15:27.718 4.243 - 4.267: 96.5816% ( 19) 00:15:27.718 4.267 - 4.290: 96.6711% ( 12) 00:15:27.718 4.290 - 4.314: 96.7607% ( 12) 00:15:27.718 4.314 - 4.338: 96.8876% ( 17) 00:15:27.718 4.338 - 4.361: 96.9996% ( 15) 00:15:27.718 4.361 - 4.385: 97.0145% ( 2) 00:15:27.718 4.385 - 4.409: 97.1040% ( 12) 00:15:27.718 4.409 - 4.433: 97.1488% ( 6) 00:15:27.718 4.433 - 4.456: 97.2011% ( 7) 00:15:27.718 4.456 - 4.480: 97.2533% ( 7) 00:15:27.718 4.480 - 4.504: 97.2906% ( 5) 00:15:27.718 4.504 - 4.527: 97.3205% ( 4) 00:15:27.718 4.527 - 4.551: 97.3354% ( 2) 00:15:27.718 4.551 - 4.575: 97.3653% ( 4) 00:15:27.718 4.575 - 4.599: 97.3802% ( 2) 00:15:27.718 4.599 - 4.622: 97.3877% ( 1) 00:15:27.718 4.622 - 4.646: 97.3951% ( 1) 00:15:27.718 4.646 - 4.670: 97.4026% ( 1) 00:15:27.718 4.670 - 4.693: 97.4250% ( 3) 00:15:27.718 4.693 - 4.717: 97.4325% ( 1) 00:15:27.718 4.717 - 4.741: 97.4847% ( 7) 00:15:27.718 4.741 - 4.764: 97.5220% ( 5) 00:15:27.718 4.764 - 4.788: 97.5817% ( 8) 00:15:27.718 4.788 - 4.812: 97.6489% ( 9) 00:15:27.718 4.812 - 4.836: 97.6862% ( 5) 00:15:27.718 4.836 - 4.859: 97.7609% ( 10) 00:15:27.718 4.859 - 4.883: 97.7758% ( 2) 00:15:27.718 4.883 - 4.907: 97.8579% ( 11) 00:15:27.718 4.907 - 4.930: 97.8877% ( 4) 00:15:27.718 4.930 - 4.954: 97.9251% ( 5) 00:15:27.718 4.954 - 4.978: 97.9698% ( 6) 00:15:27.718 4.978 - 5.001: 97.9922% ( 3) 00:15:27.718 5.001 - 5.025: 98.0146% ( 3) 00:15:27.718 5.025 - 5.049: 98.0669% ( 7) 00:15:27.718 5.049 - 5.073: 98.0818% ( 2) 00:15:27.718 5.073 - 5.096: 98.0893% ( 1) 00:15:27.718 5.096 - 5.120: 98.0967% ( 1) 00:15:27.718 5.120 - 5.144: 98.1117% ( 2) 00:15:27.718 5.144 - 5.167: 98.1490% ( 5) 00:15:27.718 5.167 - 5.191: 98.1639% ( 2) 00:15:27.718 5.191 - 5.215: 98.1714% ( 1) 00:15:27.718 5.215 - 5.239: 98.1938% ( 3) 00:15:27.718 5.239 - 5.262: 98.2012% ( 1) 00:15:27.718 5.262 - 5.286: 98.2162% ( 2) 00:15:27.718 5.286 - 5.310: 98.2236% ( 1) 
00:15:27.718 5.310 - 5.333: 98.2311% ( 1) 00:15:27.718 5.428 - 5.452: 98.2460% ( 2) 00:15:27.718 5.452 - 5.476: 98.2535% ( 1) 00:15:27.718 5.547 - 5.570: 98.2609% ( 1) 00:15:27.718 5.570 - 5.594: 98.2684% ( 1) 00:15:27.718 5.618 - 5.641: 98.2759% ( 1) 00:15:27.718 5.689 - 5.713: 98.2833% ( 1) 00:15:27.718 5.736 - 5.760: 98.2908% ( 1) 00:15:27.718 5.831 - 5.855: 98.2983% ( 1) 00:15:27.718 5.855 - 5.879: 98.3132% ( 2) 00:15:27.718 5.950 - 5.973: 98.3206% ( 1) 00:15:27.718 6.116 - 6.163: 98.3281% ( 1) 00:15:27.718 6.447 - 6.495: 98.3356% ( 1) 00:15:27.718 6.542 - 6.590: 98.3505% ( 2) 00:15:27.718 6.637 - 6.684: 98.3654% ( 2) 00:15:27.718 6.827 - 6.874: 98.3729% ( 1) 00:15:27.718 7.111 - 7.159: 98.3804% ( 1) 00:15:27.718 7.159 - 7.206: 98.4027% ( 3) 00:15:27.718 7.301 - 7.348: 98.4177% ( 2) 00:15:27.718 7.348 - 7.396: 98.4326% ( 2) 00:15:27.718 7.396 - 7.443: 98.4401% ( 1) 00:15:27.718 7.538 - 7.585: 98.4475% ( 1) 00:15:27.718 7.585 - 7.633: 98.4699% ( 3) 00:15:27.718 7.680 - 7.727: 98.4774% ( 1) 00:15:27.718 7.775 - 7.822: 98.4923% ( 2) 00:15:27.718 7.822 - 7.870: 98.4998% ( 1) 00:15:27.718 7.870 - 7.917: 98.5147% ( 2) 00:15:27.718 7.917 - 7.964: 98.5222% ( 1) 00:15:27.718 8.012 - 8.059: 98.5371% ( 2) 00:15:27.718 8.059 - 8.107: 98.5520% ( 2) 00:15:27.718 8.154 - 8.201: 98.5595% ( 1) 00:15:27.718 8.344 - 8.391: 98.5670% ( 1) 00:15:27.718 8.391 - 8.439: 98.5819% ( 2) 00:15:27.718 8.439 - 8.486: 98.5893% ( 1) 00:15:27.718 8.628 - 8.676: 98.5968% ( 1) 00:15:27.718 8.676 - 8.723: 98.6117% ( 2) 00:15:27.718 8.770 - 8.818: 98.6192% ( 1) 00:15:27.718 8.818 - 8.865: 98.6267% ( 1) 00:15:27.718 8.913 - 8.960: 98.6341% ( 1) 00:15:27.718 9.150 - 9.197: 98.6416% ( 1) 00:15:27.718 9.671 - 9.719: 98.6491% ( 1) 00:15:27.718 9.861 - 9.908: 98.6640% ( 2) 00:15:27.718 10.050 - 10.098: 98.6714% ( 1) 00:15:27.718 10.098 - 10.145: 98.6789% ( 1) 00:15:27.718 10.335 - 10.382: 98.6864% ( 1) 00:15:27.718 10.382 - 10.430: 98.6938% ( 1) 00:15:27.718 10.430 - 10.477: 98.7013% ( 1) 00:15:27.718 10.667 - 10.714: 98.7088% ( 1) 00:15:27.718 10.761 - 10.809: 98.7162% ( 1) 00:15:27.718 10.856 - 10.904: 98.7237% ( 1) 00:15:27.718 10.999 - 11.046: 98.7386% ( 2) 00:15:27.718 11.093 - 11.141: 98.7461% ( 1) 00:15:27.718 11.378 - 11.425: 98.7535% ( 1) 00:15:27.718 11.757 - 11.804: 98.7610% ( 1) 00:15:27.718 11.804 - 11.852: 98.7685% ( 1) 00:15:27.718 11.899 - 11.947: 98.7759% ( 1) 00:15:27.718 11.947 - 11.994: 98.7834% ( 1) 00:15:27.718 12.089 - 12.136: 98.7909% ( 1) 00:15:27.718 12.136 - 12.231: 98.7983% ( 1) 00:15:27.718 12.231 - 12.326: 98.8133% ( 2) 00:15:27.718 12.421 - 12.516: 98.8207% ( 1) 00:15:27.718 12.895 - 12.990: 98.8282% ( 1) 00:15:27.718 13.369 - 13.464: 98.8356% ( 1) 00:15:27.718 13.464 - 13.559: 98.8431% ( 1) 00:15:27.718 13.653 - 13.748: 98.8580% ( 2) 00:15:27.718 13.748 - 13.843: 98.8730% ( 2) 00:15:27.718 13.938 - 14.033: 98.8879% ( 2) 00:15:27.718 14.033 - 14.127: 98.8954% ( 1) 00:15:27.718 14.222 - 14.317: 98.9028% ( 1) 00:15:27.718 14.412 - 14.507: 98.9177% ( 2) 00:15:27.718 14.601 - 14.696: 98.9327% ( 2) 00:15:27.718 14.981 - 15.076: 98.9401% ( 1) 00:15:27.718 15.076 - 15.170: 98.9476% ( 1) 00:15:27.718 15.170 - 15.265: 98.9551% ( 1) 00:15:27.718 17.256 - 17.351: 98.9625% ( 1) 00:15:27.718 17.351 - 17.446: 98.9775% ( 2) 00:15:27.718 17.446 - 17.541: 98.9999% ( 3) 00:15:27.718 17.541 - 17.636: 99.0297% ( 4) 00:15:27.718 17.636 - 17.730: 99.0670% ( 5) 00:15:27.718 17.730 - 17.825: 99.1118% ( 6) 00:15:27.718 17.825 - 17.920: 99.1491% ( 5) 00:15:27.718 17.920 - 18.015: 99.2088% ( 8) 00:15:27.718 18.015 - 18.110: 
99.2760% ( 9) 00:15:27.718 18.110 - 18.204: 99.3805% ( 14) 00:15:27.718 18.204 - 18.299: 99.4850% ( 14) 00:15:27.718 18.299 - 18.394: 99.5372% ( 7) 00:15:27.718 18.394 - 18.489: 99.5895% ( 7) 00:15:27.718 18.489 - 18.584: 99.6343% ( 6) 00:15:27.718 18.584 - 18.679: 99.6791% ( 6) 00:15:27.718 18.679 - 18.773: 99.7238% ( 6) 00:15:27.718 18.773 - 18.868: 99.7686% ( 6) 00:15:27.718 18.868 - 18.963: 99.7835% ( 2) 00:15:27.718 18.963 - 19.058: 99.7985% ( 2) 00:15:27.718 19.058 - 19.153: 99.8059% ( 1) 00:15:27.718 19.153 - 19.247: 99.8209% ( 2) 00:15:27.718 19.247 - 19.342: 99.8283% ( 1) 00:15:27.718 19.342 - 19.437: 99.8358% ( 1) 00:15:27.718 19.437 - 19.532: 99.8507% ( 2) 00:15:27.718 19.721 - 19.816: 99.8582% ( 1) 00:15:27.718 22.187 - 22.281: 99.8657% ( 1) 00:15:27.718 22.756 - 22.850: 99.8731% ( 1) 00:15:27.718 23.893 - 23.988: 99.8806% ( 1) 00:15:27.718 24.841 - 25.031: 99.8880% ( 1) 00:15:27.718 25.221 - 25.410: 99.8955% ( 1) 00:15:27.718 25.600 - 25.790: 99.9030% ( 1) 00:15:27.718 25.790 - 25.979: 99.9104% ( 1) 00:15:27.718 28.065 - 28.255: 99.9179% ( 1) 00:15:27.718 35.461 - 35.650: 99.9254% ( 1) 00:15:27.718 3980.705 - 4004.978: 99.9925% ( 9) 00:15:27.718 4004.978 - 4029.250: 100.0000% ( 1) 00:15:27.718 00:15:27.719 Complete histogram 00:15:27.719 ================== 00:15:27.719 Range in us Cumulative Count 00:15:27.719 2.050 - 2.062: 0.0224% ( 3) 00:15:27.719 2.062 - 2.074: 17.5250% ( 2345) 00:15:27.719 2.074 - 2.086: 43.1034% ( 3427) 00:15:27.719 2.086 - 2.098: 45.5441% ( 327) 00:15:27.719 2.098 - 2.110: 55.4411% ( 1326) 00:15:27.719 2.110 - 2.121: 60.7628% ( 713) 00:15:27.719 2.121 - 2.133: 62.2705% ( 202) 00:15:27.719 2.133 - 2.145: 70.7270% ( 1133) 00:15:27.719 2.145 - 2.157: 76.1905% ( 732) 00:15:27.719 2.157 - 2.169: 77.3250% ( 152) 00:15:27.719 2.169 - 2.181: 80.3105% ( 400) 00:15:27.719 2.181 - 2.193: 81.9077% ( 214) 00:15:27.719 2.193 - 2.204: 82.3929% ( 65) 00:15:27.719 2.204 - 2.216: 85.8636% ( 465) 00:15:27.719 2.216 - 2.228: 88.9163% ( 409) 00:15:27.719 2.228 - 2.240: 90.9464% ( 272) 00:15:27.719 2.240 - 2.252: 92.6034% ( 222) 00:15:27.719 2.252 - 2.264: 93.3199% ( 96) 00:15:27.719 2.264 - 2.276: 93.6259% ( 41) 00:15:27.719 2.276 - 2.287: 93.9991% ( 50) 00:15:27.719 2.287 - 2.299: 94.5962% ( 80) 00:15:27.719 2.299 - 2.311: 95.1560% ( 75) 00:15:27.719 2.311 - 2.323: 95.3351% ( 24) 00:15:27.719 2.323 - 2.335: 95.4098% ( 10) 00:15:27.719 2.335 - 2.347: 95.4471% ( 5) 00:15:27.719 2.347 - 2.359: 95.5590% ( 15) 00:15:27.719 2.359 - 2.370: 95.8352% ( 37) 00:15:27.719 2.370 - 2.382: 96.1636% ( 44) 00:15:27.719 2.382 - 2.394: 96.6338% ( 63) 00:15:27.719 2.394 - 2.406: 96.9025% ( 36) 00:15:27.719 2.406 - 2.418: 97.1040% ( 27) 00:15:27.719 2.418 - 2.430: 97.2608% ( 21) 00:15:27.719 2.430 - 2.441: 97.4250% ( 22) 00:15:27.719 2.441 - 2.453: 97.5892% ( 22) 00:15:27.719 2.453 - 2.465: 97.6937% ( 14) 00:15:27.719 2.465 - 2.477: 97.8206% ( 17) 00:15:27.719 2.477 - 2.489: 97.9101% ( 12) 00:15:27.719 2.489 - 2.501: 98.0072% ( 13) 00:15:27.719 2.501 - 2.513: 98.1117% ( 14) 00:15:27.719 2.513 - 2.524: 98.1639% ( 7) 00:15:27.719 2.524 - 2.536: 98.2087% ( 6) 00:15:27.719 2.536 - 2.548: 98.2311% ( 3) 00:15:27.719 2.548 - 2.560: 98.2460% ( 2) 00:15:27.719 2.560 - 2.572: 98.2684% ( 3) 00:15:27.719 2.572 - 2.584: 98.2759% ( 1) 00:15:27.719 2.596 - 2.607: 98.2833% ( 1) 00:15:27.719 2.607 - 2.619: 98.2908% ( 1) 00:15:27.719 2.631 - 2.643: 98.3132% ( 3) 00:15:27.719 2.643 - 2.655: 98.3206% ( 1) 00:15:27.719 2.667 - 2.679: 98.3281% ( 1) 00:15:27.719 2.750 - 2.761: 98.3430% ( 2) 00:15:27.719 2.809 - 
2.821: 98.3505% ( 1) 00:15:27.719 2.821 - 2.833: 98.3654% ( 2) 00:15:27.719 2.856 - 2.868: 98.3729% ( 1) 00:15:27.719 2.904 - 2.916: 98.3804% ( 1) 00:15:27.719 2.975 - 2.987: 98.3878% ( 1) 00:15:27.719 3.034 - 3.058: 98.3953% ( 1) 00:15:27.719 3.200 - 3.224: 98.4027% ( 1) 00:15:27.719 3.224 - 3.247: 98.4102% ( 1) 00:15:27.719 3.271 - 3.295: 98.4177% ( 1) 00:15:27.719 3.295 - 3.319: 98.4251% ( 1) 00:15:27.719 3.342 - 3.366: 98.4326% ( 1) 00:15:27.719 3.390 - 3.413: 98.4401% ( 1) 00:15:27.719 3.413 - 3.437: 98.4550% ( 2) 00:15:27.719 [2024-07-11 21:22:02.068474] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:27.719 3.437 - 3.461: 98.4774% ( 3) 00:15:27.719 3.532 - 3.556: 98.4848% ( 1) 00:15:27.719 3.556 - 3.579: 98.4998% ( 2) 00:15:27.719 3.579 - 3.603: 98.5072% ( 1) 00:15:27.719 3.603 - 3.627: 98.5147% ( 1) 00:15:27.719 3.650 - 3.674: 98.5222% ( 1) 00:15:27.719 3.674 - 3.698: 98.5446% ( 3) 00:15:27.719 3.698 - 3.721: 98.5595% ( 2) 00:15:27.719 3.745 - 3.769: 98.5670% ( 1) 00:15:27.719 3.793 - 3.816: 98.5744% ( 1) 00:15:27.719 3.864 - 3.887: 98.5893% ( 2) 00:15:27.719 3.911 - 3.935: 98.5968% ( 1) 00:15:27.719 3.982 - 4.006: 98.6117% ( 2) 00:15:27.719 4.006 - 4.030: 98.6192% ( 1) 00:15:27.719 5.262 - 5.286: 98.6267% ( 1) 00:15:27.719 5.286 - 5.310: 98.6341% ( 1) 00:15:27.719 5.357 - 5.381: 98.6491% ( 2) 00:15:27.719 5.381 - 5.404: 98.6565% ( 1) 00:15:27.719 5.570 - 5.594: 98.6640% ( 1) 00:15:27.719 5.594 - 5.618: 98.6714% ( 1) 00:15:27.719 5.713 - 5.736: 98.6938% ( 3) 00:15:27.719 5.902 - 5.926: 98.7013% ( 1) 00:15:27.719 5.926 - 5.950: 98.7088% ( 1) 00:15:27.719 5.950 - 5.973: 98.7162% ( 1) 00:15:27.719 6.021 - 6.044: 98.7237% ( 1) 00:15:27.719 6.116 - 6.163: 98.7312% ( 1) 00:15:27.719 6.163 - 6.210: 98.7386% ( 1) 00:15:27.719 6.305 - 6.353: 98.7535% ( 2) 00:15:27.719 6.400 - 6.447: 98.7685% ( 2) 00:15:27.719 6.637 - 6.684: 98.7759% ( 1) 00:15:27.719 6.684 - 6.732: 98.7834% ( 1) 00:15:27.719 7.111 - 7.159: 98.7909% ( 1) 00:15:27.719 7.964 - 8.012: 98.7983% ( 1) 00:15:27.719 8.913 - 8.960: 98.8058% ( 1) 00:15:27.719 11.994 - 12.041: 98.8133% ( 1) 00:15:27.719 15.455 - 15.550: 98.8282% ( 2) 00:15:27.719 15.550 - 15.644: 98.8431% ( 2) 00:15:27.719 15.644 - 15.739: 98.8655% ( 3) 00:15:27.719 15.739 - 15.834: 98.8730% ( 1) 00:15:27.719 15.834 - 15.929: 98.8804% ( 1) 00:15:27.719 15.929 - 16.024: 98.9252% ( 6) 00:15:27.719 16.024 - 16.119: 98.9401% ( 2) 00:15:27.719 16.119 - 16.213: 98.9476% ( 1) 00:15:27.719 16.213 - 16.308: 98.9999% ( 7) 00:15:27.719 16.308 - 16.403: 99.0372% ( 5) 00:15:27.719 16.403 - 16.498: 99.0670% ( 4) 00:15:27.719 16.498 - 16.593: 99.1043% ( 5) 00:15:27.719 16.593 - 16.687: 99.1417% ( 5) 00:15:27.719 16.687 - 16.782: 99.2014% ( 8) 00:15:27.719 16.782 - 16.877: 99.2462% ( 6) 00:15:27.719 16.877 - 16.972: 99.2909% ( 6) 00:15:27.719 16.972 - 17.067: 99.2984% ( 1) 00:15:27.719 17.161 - 17.256: 99.3059% ( 1) 00:15:27.719 17.256 - 17.351: 99.3133% ( 1) 00:15:27.719 17.351 - 17.446: 99.3208% ( 1) 00:15:27.719 17.446 - 17.541: 99.3357% ( 2) 00:15:27.719 17.920 - 18.015: 99.3506% ( 2) 00:15:27.719 18.868 - 18.963: 99.3581% ( 1) 00:15:27.719 20.196 - 20.290: 99.3656% ( 1) 00:15:27.719 27.876 - 28.065: 99.3730% ( 1) 00:15:27.719 3859.342 - 3883.615: 99.3880% ( 2) 00:15:27.719 3980.705 - 4004.978: 99.9254% ( 72) 00:15:27.719 4004.978 - 4029.250: 100.0000% ( 10) 00:15:27.719 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1
nqn.2019-07.io.spdk:cnode1 1 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:27.719 [ 00:15:27.719 { 00:15:27.719 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:27.719 "subtype": "Discovery", 00:15:27.719 "listen_addresses": [], 00:15:27.719 "allow_any_host": true, 00:15:27.719 "hosts": [] 00:15:27.719 }, 00:15:27.719 { 00:15:27.719 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:27.719 "subtype": "NVMe", 00:15:27.719 "listen_addresses": [ 00:15:27.719 { 00:15:27.719 "trtype": "VFIOUSER", 00:15:27.719 "adrfam": "IPv4", 00:15:27.719 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:27.719 "trsvcid": "0" 00:15:27.719 } 00:15:27.719 ], 00:15:27.719 "allow_any_host": true, 00:15:27.719 "hosts": [], 00:15:27.719 "serial_number": "SPDK1", 00:15:27.719 "model_number": "SPDK bdev Controller", 00:15:27.719 "max_namespaces": 32, 00:15:27.719 "min_cntlid": 1, 00:15:27.719 "max_cntlid": 65519, 00:15:27.719 "namespaces": [ 00:15:27.719 { 00:15:27.719 "nsid": 1, 00:15:27.719 "bdev_name": "Malloc1", 00:15:27.719 "name": "Malloc1", 00:15:27.719 "nguid": "323034520AB24C5F972C8E773A5985D8", 00:15:27.719 "uuid": "32303452-0ab2-4c5f-972c-8e773a5985d8" 00:15:27.719 } 00:15:27.719 ] 00:15:27.719 }, 00:15:27.719 { 00:15:27.719 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:27.719 "subtype": "NVMe", 00:15:27.719 "listen_addresses": [ 00:15:27.719 { 00:15:27.719 "trtype": "VFIOUSER", 00:15:27.719 "adrfam": "IPv4", 00:15:27.719 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:27.719 "trsvcid": "0" 00:15:27.719 } 00:15:27.719 ], 00:15:27.719 "allow_any_host": true, 00:15:27.719 "hosts": [], 00:15:27.719 "serial_number": "SPDK2", 00:15:27.719 "model_number": "SPDK bdev Controller", 00:15:27.719 "max_namespaces": 32, 00:15:27.719 "min_cntlid": 1, 00:15:27.719 "max_cntlid": 65519, 00:15:27.719 "namespaces": [ 00:15:27.719 { 00:15:27.719 "nsid": 1, 00:15:27.719 "bdev_name": "Malloc2", 00:15:27.719 "name": "Malloc2", 00:15:27.719 "nguid": "00DFAE4C19134343B1C0A338883518BD", 00:15:27.719 "uuid": "00dfae4c-1913-4343-b1c0-a338883518bd" 00:15:27.719 } 00:15:27.719 ] 00:15:27.719 } 00:15:27.719 ] 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=869548 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:27.719 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:27.720 21:22:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:27.720 21:22:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:27.720 21:22:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:27.720 21:22:02 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:27.720 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:27.720 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:27.720 EAL: No free 2048 kB hugepages reported on node 1 00:15:27.978 [2024-07-11 21:22:02.533251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:27.978 Malloc3 00:15:27.978 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:28.236 [2024-07-11 21:22:02.886798] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:28.236 21:22:02 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:28.236 Asynchronous Event Request test 00:15:28.236 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.236 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:28.236 Registering asynchronous event callbacks... 00:15:28.236 Starting namespace attribute notice tests for all controllers... 00:15:28.236 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:28.236 aer_cb - Changed Namespace 00:15:28.236 Cleaning up... 00:15:28.495 [ 00:15:28.495 { 00:15:28.495 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:28.495 "subtype": "Discovery", 00:15:28.495 "listen_addresses": [], 00:15:28.495 "allow_any_host": true, 00:15:28.495 "hosts": [] 00:15:28.495 }, 00:15:28.495 { 00:15:28.495 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:28.495 "subtype": "NVMe", 00:15:28.495 "listen_addresses": [ 00:15:28.495 { 00:15:28.495 "trtype": "VFIOUSER", 00:15:28.495 "adrfam": "IPv4", 00:15:28.495 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:28.495 "trsvcid": "0" 00:15:28.495 } 00:15:28.495 ], 00:15:28.495 "allow_any_host": true, 00:15:28.495 "hosts": [], 00:15:28.495 "serial_number": "SPDK1", 00:15:28.495 "model_number": "SPDK bdev Controller", 00:15:28.495 "max_namespaces": 32, 00:15:28.495 "min_cntlid": 1, 00:15:28.495 "max_cntlid": 65519, 00:15:28.496 "namespaces": [ 00:15:28.496 { 00:15:28.496 "nsid": 1, 00:15:28.496 "bdev_name": "Malloc1", 00:15:28.496 "name": "Malloc1", 00:15:28.496 "nguid": "323034520AB24C5F972C8E773A5985D8", 00:15:28.496 "uuid": "32303452-0ab2-4c5f-972c-8e773a5985d8" 00:15:28.496 }, 00:15:28.496 { 00:15:28.496 "nsid": 2, 00:15:28.496 "bdev_name": "Malloc3", 00:15:28.496 "name": "Malloc3", 00:15:28.496 "nguid": "FB4CD22099E84BB89AD2DA58C81B2D5D", 00:15:28.496 "uuid": "fb4cd220-99e8-4bb8-9ad2-da58c81b2d5d" 00:15:28.496 } 00:15:28.496 ] 00:15:28.496 }, 00:15:28.496 { 00:15:28.496 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:28.496 "subtype": "NVMe", 00:15:28.496 "listen_addresses": [ 00:15:28.496 { 00:15:28.496 "trtype": "VFIOUSER", 00:15:28.496 "adrfam": "IPv4", 00:15:28.496 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:28.496 "trsvcid": "0" 00:15:28.496 } 00:15:28.496 ], 00:15:28.496 "allow_any_host": true, 00:15:28.496 "hosts": [], 00:15:28.496 "serial_number": "SPDK2", 00:15:28.496 "model_number": "SPDK bdev Controller", 00:15:28.496 
"max_namespaces": 32, 00:15:28.496 "min_cntlid": 1, 00:15:28.496 "max_cntlid": 65519, 00:15:28.496 "namespaces": [ 00:15:28.496 { 00:15:28.496 "nsid": 1, 00:15:28.496 "bdev_name": "Malloc2", 00:15:28.496 "name": "Malloc2", 00:15:28.496 "nguid": "00DFAE4C19134343B1C0A338883518BD", 00:15:28.496 "uuid": "00dfae4c-1913-4343-b1c0-a338883518bd" 00:15:28.496 } 00:15:28.496 ] 00:15:28.496 } 00:15:28.496 ] 00:15:28.496 21:22:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 869548 00:15:28.496 21:22:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:28.496 21:22:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:28.496 21:22:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:28.496 21:22:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:28.496 [2024-07-11 21:22:03.161150] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:28.496 [2024-07-11 21:22:03.161200] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid869561 ] 00:15:28.496 EAL: No free 2048 kB hugepages reported on node 1 00:15:28.496 [2024-07-11 21:22:03.193817] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:28.496 [2024-07-11 21:22:03.199171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:28.496 [2024-07-11 21:22:03.199203] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f60b6e8f000 00:15:28.496 [2024-07-11 21:22:03.200172] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.496 [2024-07-11 21:22:03.201181] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.496 [2024-07-11 21:22:03.202186] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.496 [2024-07-11 21:22:03.203184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:28.496 [2024-07-11 21:22:03.204191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:28.496 [2024-07-11 21:22:03.205196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.496 [2024-07-11 21:22:03.206198] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:28.496 [2024-07-11 21:22:03.207207] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:28.496 [2024-07-11 21:22:03.208217] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:28.496 [2024-07-11 21:22:03.208238] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f60b5c43000 00:15:28.496 [2024-07-11 21:22:03.209376] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:28.496 [2024-07-11 21:22:03.224101] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:28.496 [2024-07-11 21:22:03.224140] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:28.496 [2024-07-11 21:22:03.226243] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:28.496 [2024-07-11 21:22:03.226294] nvme_pcie_common.c: 132:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:28.496 [2024-07-11 21:22:03.226380] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:28.496 [2024-07-11 21:22:03.226403] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:28.496 [2024-07-11 21:22:03.226412] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:28.496 [2024-07-11 21:22:03.227249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:28.496 [2024-07-11 21:22:03.227269] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:28.496 [2024-07-11 21:22:03.227281] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:28.496 [2024-07-11 21:22:03.228254] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:28.496 [2024-07-11 21:22:03.228275] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:28.496 [2024-07-11 21:22:03.228288] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:28.496 [2024-07-11 21:22:03.229264] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:28.496 [2024-07-11 21:22:03.229284] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:28.496 [2024-07-11 21:22:03.230269] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:28.496 [2024-07-11 21:22:03.230289] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:28.496 [2024-07-11 21:22:03.230298] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:28.496 [2024-07-11 21:22:03.230310] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:28.496 [2024-07-11 21:22:03.230419] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:28.496 [2024-07-11 21:22:03.230427] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:28.496 [2024-07-11 21:22:03.230435] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:28.496 [2024-07-11 21:22:03.231282] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:28.496 [2024-07-11 21:22:03.232285] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:28.496 [2024-07-11 21:22:03.234767] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:28.496 [2024-07-11 21:22:03.235296] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:28.496 [2024-07-11 21:22:03.235378] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:28.496 [2024-07-11 21:22:03.236316] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:28.496 [2024-07-11 21:22:03.236336] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:28.496 [2024-07-11 21:22:03.236345] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.236368] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:28.496 [2024-07-11 21:22:03.236381] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.236402] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:28.496 [2024-07-11 21:22:03.236412] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:28.496 [2024-07-11 21:22:03.236432] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:28.496 [2024-07-11 21:22:03.245766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:28.496 [2024-07-11 21:22:03.245791] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:28.496 [2024-07-11 21:22:03.245805] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:28.496 [2024-07-11 21:22:03.245813] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:28.496 [2024-07-11 21:22:03.245821] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:28.496 [2024-07-11 21:22:03.245829] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:28.496 [2024-07-11 21:22:03.245837] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:28.496 [2024-07-11 21:22:03.245845] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.245859] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.245875] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:28.496 [2024-07-11 21:22:03.253761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:28.496 [2024-07-11 21:22:03.253789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.496 [2024-07-11 21:22:03.253803] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.496 [2024-07-11 21:22:03.253816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.496 [2024-07-11 21:22:03.253827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:28.496 [2024-07-11 21:22:03.253836] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.253852] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.253866] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:28.496 [2024-07-11 21:22:03.261767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:28.496 [2024-07-11 21:22:03.261786] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:28.496 [2024-07-11 21:22:03.261796] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.261807] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.261817] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:28.496 [2024-07-11 21:22:03.261831] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.269781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.269862] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.269878] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.269891] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:28.756 [2024-07-11 21:22:03.269899] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:28.756 [2024-07-11 21:22:03.269909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.277781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.277805] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:28.756 [2024-07-11 21:22:03.277820] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.277835] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.277847] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:28.756 [2024-07-11 21:22:03.277855] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:28.756 [2024-07-11 21:22:03.277865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.285766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.285795] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.285810] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.285823] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:28.756 [2024-07-11 21:22:03.285831] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:28.756 [2024-07-11 21:22:03.285841] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.293764] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.293786] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.293798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.293812] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.293823] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.293831] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.293840] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.293852] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:28.756 [2024-07-11 21:22:03.293860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:28.756 [2024-07-11 21:22:03.293868] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:28.756 [2024-07-11 21:22:03.293895] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.301782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.301808] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.309765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.309792] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.317763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.317799] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.325766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.325809] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:28.756 [2024-07-11 21:22:03.325821] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:28.756 [2024-07-11 21:22:03.325828] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 
00:15:28.756 [2024-07-11 21:22:03.325834] nvme_pcie_common.c:1254:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:28.756 [2024-07-11 21:22:03.325843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:28.756 [2024-07-11 21:22:03.325855] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:28.756 [2024-07-11 21:22:03.325863] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:28.756 [2024-07-11 21:22:03.325873] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.325884] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:28.756 [2024-07-11 21:22:03.325892] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:28.756 [2024-07-11 21:22:03.325901] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.325913] nvme_pcie_common.c:1201:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:28.756 [2024-07-11 21:22:03.325921] nvme_pcie_common.c:1229:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:28.756 [2024-07-11 21:22:03.325930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:28.756 [2024-07-11 21:22:03.333767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.333797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.333818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:28.756 [2024-07-11 21:22:03.333831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:28.756 ===================================================== 00:15:28.756 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:28.756 ===================================================== 00:15:28.756 Controller Capabilities/Features 00:15:28.756 ================================ 00:15:28.756 Vendor ID: 4e58 00:15:28.756 Subsystem Vendor ID: 4e58 00:15:28.756 Serial Number: SPDK2 00:15:28.756 Model Number: SPDK bdev Controller 00:15:28.756 Firmware Version: 24.09 00:15:28.756 Recommended Arb Burst: 6 00:15:28.756 IEEE OUI Identifier: 8d 6b 50 00:15:28.756 Multi-path I/O 00:15:28.756 May have multiple subsystem ports: Yes 00:15:28.756 May have multiple controllers: Yes 00:15:28.756 Associated with SR-IOV VF: No 00:15:28.756 Max Data Transfer Size: 131072 00:15:28.756 Max Number of Namespaces: 32 00:15:28.756 Max Number of I/O Queues: 127 00:15:28.756 NVMe Specification Version (VS): 1.3 00:15:28.756 NVMe Specification Version (Identify): 1.3 00:15:28.756 Maximum Queue Entries: 256 00:15:28.756 Contiguous Queues Required: Yes 00:15:28.756 Arbitration Mechanisms 
Supported 00:15:28.756 Weighted Round Robin: Not Supported 00:15:28.756 Vendor Specific: Not Supported 00:15:28.756 Reset Timeout: 15000 ms 00:15:28.756 Doorbell Stride: 4 bytes 00:15:28.756 NVM Subsystem Reset: Not Supported 00:15:28.756 Command Sets Supported 00:15:28.756 NVM Command Set: Supported 00:15:28.756 Boot Partition: Not Supported 00:15:28.756 Memory Page Size Minimum: 4096 bytes 00:15:28.756 Memory Page Size Maximum: 4096 bytes 00:15:28.756 Persistent Memory Region: Not Supported 00:15:28.756 Optional Asynchronous Events Supported 00:15:28.756 Namespace Attribute Notices: Supported 00:15:28.756 Firmware Activation Notices: Not Supported 00:15:28.756 ANA Change Notices: Not Supported 00:15:28.756 PLE Aggregate Log Change Notices: Not Supported 00:15:28.756 LBA Status Info Alert Notices: Not Supported 00:15:28.757 EGE Aggregate Log Change Notices: Not Supported 00:15:28.757 Normal NVM Subsystem Shutdown event: Not Supported 00:15:28.757 Zone Descriptor Change Notices: Not Supported 00:15:28.757 Discovery Log Change Notices: Not Supported 00:15:28.757 Controller Attributes 00:15:28.757 128-bit Host Identifier: Supported 00:15:28.757 Non-Operational Permissive Mode: Not Supported 00:15:28.757 NVM Sets: Not Supported 00:15:28.757 Read Recovery Levels: Not Supported 00:15:28.757 Endurance Groups: Not Supported 00:15:28.757 Predictable Latency Mode: Not Supported 00:15:28.757 Traffic Based Keep ALive: Not Supported 00:15:28.757 Namespace Granularity: Not Supported 00:15:28.757 SQ Associations: Not Supported 00:15:28.757 UUID List: Not Supported 00:15:28.757 Multi-Domain Subsystem: Not Supported 00:15:28.757 Fixed Capacity Management: Not Supported 00:15:28.757 Variable Capacity Management: Not Supported 00:15:28.757 Delete Endurance Group: Not Supported 00:15:28.757 Delete NVM Set: Not Supported 00:15:28.757 Extended LBA Formats Supported: Not Supported 00:15:28.757 Flexible Data Placement Supported: Not Supported 00:15:28.757 00:15:28.757 Controller Memory Buffer Support 00:15:28.757 ================================ 00:15:28.757 Supported: No 00:15:28.757 00:15:28.757 Persistent Memory Region Support 00:15:28.757 ================================ 00:15:28.757 Supported: No 00:15:28.757 00:15:28.757 Admin Command Set Attributes 00:15:28.757 ============================ 00:15:28.757 Security Send/Receive: Not Supported 00:15:28.757 Format NVM: Not Supported 00:15:28.757 Firmware Activate/Download: Not Supported 00:15:28.757 Namespace Management: Not Supported 00:15:28.757 Device Self-Test: Not Supported 00:15:28.757 Directives: Not Supported 00:15:28.757 NVMe-MI: Not Supported 00:15:28.757 Virtualization Management: Not Supported 00:15:28.757 Doorbell Buffer Config: Not Supported 00:15:28.757 Get LBA Status Capability: Not Supported 00:15:28.757 Command & Feature Lockdown Capability: Not Supported 00:15:28.757 Abort Command Limit: 4 00:15:28.757 Async Event Request Limit: 4 00:15:28.757 Number of Firmware Slots: N/A 00:15:28.757 Firmware Slot 1 Read-Only: N/A 00:15:28.757 Firmware Activation Without Reset: N/A 00:15:28.757 Multiple Update Detection Support: N/A 00:15:28.757 Firmware Update Granularity: No Information Provided 00:15:28.757 Per-Namespace SMART Log: No 00:15:28.757 Asymmetric Namespace Access Log Page: Not Supported 00:15:28.757 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:28.757 Command Effects Log Page: Supported 00:15:28.757 Get Log Page Extended Data: Supported 00:15:28.757 Telemetry Log Pages: Not Supported 00:15:28.757 Persistent Event Log Pages: Not Supported 
00:15:28.757 Supported Log Pages Log Page: May Support 00:15:28.757 Commands Supported & Effects Log Page: Not Supported 00:15:28.757 Feature Identifiers & Effects Log Page:May Support 00:15:28.757 NVMe-MI Commands & Effects Log Page: May Support 00:15:28.757 Data Area 4 for Telemetry Log: Not Supported 00:15:28.757 Error Log Page Entries Supported: 128 00:15:28.757 Keep Alive: Supported 00:15:28.757 Keep Alive Granularity: 10000 ms 00:15:28.757 00:15:28.757 NVM Command Set Attributes 00:15:28.757 ========================== 00:15:28.757 Submission Queue Entry Size 00:15:28.757 Max: 64 00:15:28.757 Min: 64 00:15:28.757 Completion Queue Entry Size 00:15:28.757 Max: 16 00:15:28.757 Min: 16 00:15:28.757 Number of Namespaces: 32 00:15:28.757 Compare Command: Supported 00:15:28.757 Write Uncorrectable Command: Not Supported 00:15:28.757 Dataset Management Command: Supported 00:15:28.757 Write Zeroes Command: Supported 00:15:28.757 Set Features Save Field: Not Supported 00:15:28.757 Reservations: Not Supported 00:15:28.757 Timestamp: Not Supported 00:15:28.757 Copy: Supported 00:15:28.757 Volatile Write Cache: Present 00:15:28.757 Atomic Write Unit (Normal): 1 00:15:28.757 Atomic Write Unit (PFail): 1 00:15:28.757 Atomic Compare & Write Unit: 1 00:15:28.757 Fused Compare & Write: Supported 00:15:28.757 Scatter-Gather List 00:15:28.757 SGL Command Set: Supported (Dword aligned) 00:15:28.757 SGL Keyed: Not Supported 00:15:28.757 SGL Bit Bucket Descriptor: Not Supported 00:15:28.757 SGL Metadata Pointer: Not Supported 00:15:28.757 Oversized SGL: Not Supported 00:15:28.757 SGL Metadata Address: Not Supported 00:15:28.757 SGL Offset: Not Supported 00:15:28.757 Transport SGL Data Block: Not Supported 00:15:28.757 Replay Protected Memory Block: Not Supported 00:15:28.757 00:15:28.757 Firmware Slot Information 00:15:28.757 ========================= 00:15:28.757 Active slot: 1 00:15:28.757 Slot 1 Firmware Revision: 24.09 00:15:28.757 00:15:28.757 00:15:28.757 Commands Supported and Effects 00:15:28.757 ============================== 00:15:28.757 Admin Commands 00:15:28.757 -------------- 00:15:28.757 Get Log Page (02h): Supported 00:15:28.757 Identify (06h): Supported 00:15:28.757 Abort (08h): Supported 00:15:28.757 Set Features (09h): Supported 00:15:28.757 Get Features (0Ah): Supported 00:15:28.757 Asynchronous Event Request (0Ch): Supported 00:15:28.757 Keep Alive (18h): Supported 00:15:28.757 I/O Commands 00:15:28.757 ------------ 00:15:28.757 Flush (00h): Supported LBA-Change 00:15:28.757 Write (01h): Supported LBA-Change 00:15:28.757 Read (02h): Supported 00:15:28.757 Compare (05h): Supported 00:15:28.757 Write Zeroes (08h): Supported LBA-Change 00:15:28.757 Dataset Management (09h): Supported LBA-Change 00:15:28.757 Copy (19h): Supported LBA-Change 00:15:28.757 00:15:28.757 Error Log 00:15:28.757 ========= 00:15:28.757 00:15:28.757 Arbitration 00:15:28.757 =========== 00:15:28.757 Arbitration Burst: 1 00:15:28.757 00:15:28.757 Power Management 00:15:28.757 ================ 00:15:28.757 Number of Power States: 1 00:15:28.757 Current Power State: Power State #0 00:15:28.757 Power State #0: 00:15:28.757 Max Power: 0.00 W 00:15:28.757 Non-Operational State: Operational 00:15:28.757 Entry Latency: Not Reported 00:15:28.757 Exit Latency: Not Reported 00:15:28.757 Relative Read Throughput: 0 00:15:28.757 Relative Read Latency: 0 00:15:28.757 Relative Write Throughput: 0 00:15:28.757 Relative Write Latency: 0 00:15:28.757 Idle Power: Not Reported 00:15:28.757 Active Power: Not Reported 00:15:28.757 
Non-Operational Permissive Mode: Not Supported 00:15:28.757 00:15:28.757 Health Information 00:15:28.757 ================== 00:15:28.757 Critical Warnings: 00:15:28.757 Available Spare Space: OK 00:15:28.757 Temperature: OK 00:15:28.757 Device Reliability: OK 00:15:28.757 Read Only: No 00:15:28.757 Volatile Memory Backup: OK 00:15:28.757 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:28.757 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:28.757 Available Spare: 0% 00:15:28.757 [2024-07-11 21:22:03.333950] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:28.757 [2024-07-11 21:22:03.341763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:28.757 [2024-07-11 21:22:03.341843] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:28.757 [2024-07-11 21:22:03.341862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.757 [2024-07-11 21:22:03.341874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.757 [2024-07-11 21:22:03.341883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.757 [2024-07-11 21:22:03.341893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:28.757 [2024-07-11 21:22:03.341957] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:28.757 [2024-07-11 21:22:03.341979] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:28.757 [2024-07-11 21:22:03.342958] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:28.757 [2024-07-11 21:22:03.343045] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:28.757 [2024-07-11 21:22:03.343078] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:28.757 [2024-07-11 21:22:03.343971] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:28.757 [2024-07-11 21:22:03.343996] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:28.757 [2024-07-11 21:22:03.344046] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:28.757 [2024-07-11 21:22:03.345233] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:28.757 Available Spare Threshold: 0% 00:15:28.757 Life Percentage Used: 0% 00:15:28.757 Data Units Read: 0 00:15:28.757 Data Units Written: 0 00:15:28.757 Host Read Commands: 0 00:15:28.757 Host Write Commands: 0 00:15:28.757 Controller Busy Time: 0 minutes 00:15:28.757 Power Cycles: 0 00:15:28.757 Power On Hours: 0 hours 00:15:28.757 Unsafe Shutdowns: 0 00:15:28.758 Unrecoverable Media
Errors: 0 00:15:28.758 Lifetime Error Log Entries: 0 00:15:28.758 Warning Temperature Time: 0 minutes 00:15:28.758 Critical Temperature Time: 0 minutes 00:15:28.758 00:15:28.758 Number of Queues 00:15:28.758 ================ 00:15:28.758 Number of I/O Submission Queues: 127 00:15:28.758 Number of I/O Completion Queues: 127 00:15:28.758 00:15:28.758 Active Namespaces 00:15:28.758 ================= 00:15:28.758 Namespace ID:1 00:15:28.758 Error Recovery Timeout: Unlimited 00:15:28.758 Command Set Identifier: NVM (00h) 00:15:28.758 Deallocate: Supported 00:15:28.758 Deallocated/Unwritten Error: Not Supported 00:15:28.758 Deallocated Read Value: Unknown 00:15:28.758 Deallocate in Write Zeroes: Not Supported 00:15:28.758 Deallocated Guard Field: 0xFFFF 00:15:28.758 Flush: Supported 00:15:28.758 Reservation: Supported 00:15:28.758 Namespace Sharing Capabilities: Multiple Controllers 00:15:28.758 Size (in LBAs): 131072 (0GiB) 00:15:28.758 Capacity (in LBAs): 131072 (0GiB) 00:15:28.758 Utilization (in LBAs): 131072 (0GiB) 00:15:28.758 NGUID: 00DFAE4C19134343B1C0A338883518BD 00:15:28.758 UUID: 00dfae4c-1913-4343-b1c0-a338883518bd 00:15:28.758 Thin Provisioning: Not Supported 00:15:28.758 Per-NS Atomic Units: Yes 00:15:28.758 Atomic Boundary Size (Normal): 0 00:15:28.758 Atomic Boundary Size (PFail): 0 00:15:28.758 Atomic Boundary Offset: 0 00:15:28.758 Maximum Single Source Range Length: 65535 00:15:28.758 Maximum Copy Length: 65535 00:15:28.758 Maximum Source Range Count: 1 00:15:28.758 NGUID/EUI64 Never Reused: No 00:15:28.758 Namespace Write Protected: No 00:15:28.758 Number of LBA Formats: 1 00:15:28.758 Current LBA Format: LBA Format #00 00:15:28.758 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:28.758 00:15:28.758 21:22:03 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:28.758 EAL: No free 2048 kB hugepages reported on node 1 00:15:29.015 [2024-07-11 21:22:03.571585] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:34.310 Initializing NVMe Controllers 00:15:34.310 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:34.310 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:34.310 Initialization complete. Launching workers. 
00:15:34.310 ======================================================== 00:15:34.310 Latency(us) 00:15:34.310 Device Information : IOPS MiB/s Average min max 00:15:34.310 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 35013.68 136.77 3654.87 1169.21 7250.49 00:15:34.310 ======================================================== 00:15:34.310 Total : 35013.68 136.77 3654.87 1169.21 7250.49 00:15:34.310 00:15:34.310 [2024-07-11 21:22:08.675140] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:34.310 21:22:08 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:34.310 EAL: No free 2048 kB hugepages reported on node 1 00:15:34.310 [2024-07-11 21:22:08.917842] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:39.635 Initializing NVMe Controllers 00:15:39.635 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:39.635 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:39.635 Initialization complete. Launching workers. 00:15:39.635 ======================================================== 00:15:39.635 Latency(us) 00:15:39.635 Device Information : IOPS MiB/s Average min max 00:15:39.635 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31791.17 124.18 4026.36 1220.84 8244.44 00:15:39.635 ======================================================== 00:15:39.635 Total : 31791.17 124.18 4026.36 1220.84 8244.44 00:15:39.635 00:15:39.635 [2024-07-11 21:22:13.940379] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:39.635 21:22:13 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:39.635 EAL: No free 2048 kB hugepages reported on node 1 00:15:39.635 [2024-07-11 21:22:14.137176] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:44.912 [2024-07-11 21:22:19.270903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:44.912 Initializing NVMe Controllers 00:15:44.912 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.912 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:44.912 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:44.912 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:44.912 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:44.912 Initialization complete. Launching workers. 
00:15:44.912 Starting thread on core 2 00:15:44.912 Starting thread on core 3 00:15:44.912 Starting thread on core 1 00:15:44.912 21:22:19 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:44.912 EAL: No free 2048 kB hugepages reported on node 1 00:15:44.912 [2024-07-11 21:22:19.579251] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.198 [2024-07-11 21:22:22.644148] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:48.198 Initializing NVMe Controllers 00:15:48.198 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.198 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.198 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:48.198 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:48.198 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:48.198 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:48.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:48.198 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:48.198 Initialization complete. Launching workers. 00:15:48.198 Starting thread on core 1 with urgent priority queue 00:15:48.198 Starting thread on core 2 with urgent priority queue 00:15:48.198 Starting thread on core 3 with urgent priority queue 00:15:48.198 Starting thread on core 0 with urgent priority queue 00:15:48.198 SPDK bdev Controller (SPDK2 ) core 0: 6160.67 IO/s 16.23 secs/100000 ios 00:15:48.198 SPDK bdev Controller (SPDK2 ) core 1: 6049.00 IO/s 16.53 secs/100000 ios 00:15:48.198 SPDK bdev Controller (SPDK2 ) core 2: 6855.67 IO/s 14.59 secs/100000 ios 00:15:48.198 SPDK bdev Controller (SPDK2 ) core 3: 5533.67 IO/s 18.07 secs/100000 ios 00:15:48.198 ======================================================== 00:15:48.198 00:15:48.198 21:22:22 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:48.198 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.198 [2024-07-11 21:22:22.953266] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:48.198 Initializing NVMe Controllers 00:15:48.198 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.198 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:48.198 Namespace ID: 1 size: 0GB 00:15:48.198 Initialization complete. 00:15:48.198 INFO: using host memory buffer for IO 00:15:48.198 Hello world! 
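[Editor's note] The perf and arbitration figures above are internally consistent and easy to spot-check: spdk_nvme_perf reports MiB/s as IOPS x I/O size (4096 B here, from -o 4096) / 2^20, and the arbitration example reports secs/100000 ios as 100000 / IO/s. A quick verification in plain shell (a sketch, assuming bc is available on the build host):

$ echo '35013.68 * 4096 / 1048576' | bc -l   # ~136.77 MiB/s, matching the read run above
$ echo '100000 / 6160.67' | bc -l            # ~16.23 secs/100000 ios, matching core 0 in the arbitration table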
00:15:48.198 [2024-07-11 21:22:22.962463] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:48.455 21:22:23 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:48.455 EAL: No free 2048 kB hugepages reported on node 1 00:15:48.714 [2024-07-11 21:22:23.241782] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:49.648 Initializing NVMe Controllers 00:15:49.648 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.648 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:49.648 Initialization complete. Launching workers. 00:15:49.648 submit (in ns) avg, min, max = 9486.7, 3496.7, 5996850.0 00:15:49.648 complete (in ns) avg, min, max = 27561.4, 2063.3, 4016934.4 00:15:49.648 00:15:49.648 Submit histogram 00:15:49.648 ================ 00:15:49.648 Range in us Cumulative Count 00:15:49.648 3.484 - 3.508: 0.0455% ( 6) 00:15:49.648 3.508 - 3.532: 0.4323% ( 51) 00:15:49.648 3.532 - 3.556: 1.4941% ( 140) 00:15:49.648 3.556 - 3.579: 4.3762% ( 380) 00:15:49.648 3.579 - 3.603: 9.4501% ( 669) 00:15:49.648 3.603 - 3.627: 17.0269% ( 999) 00:15:49.648 3.627 - 3.650: 26.6818% ( 1273) 00:15:49.648 3.650 - 3.674: 36.1168% ( 1244) 00:15:49.648 3.674 - 3.698: 44.1487% ( 1059) 00:15:49.648 3.698 - 3.721: 50.6333% ( 855) 00:15:49.648 3.721 - 3.745: 55.1157% ( 591) 00:15:49.648 3.745 - 3.769: 58.9534% ( 506) 00:15:49.648 3.769 - 3.793: 62.4877% ( 466) 00:15:49.648 3.793 - 3.816: 65.9386% ( 455) 00:15:49.648 3.816 - 3.840: 69.3819% ( 454) 00:15:49.648 3.840 - 3.864: 73.6519% ( 563) 00:15:49.648 3.864 - 3.887: 77.6792% ( 531) 00:15:49.648 3.887 - 3.911: 81.6686% ( 526) 00:15:49.648 3.911 - 3.935: 84.5885% ( 385) 00:15:49.648 3.935 - 3.959: 87.0459% ( 324) 00:15:49.648 3.959 - 3.982: 88.8586% ( 239) 00:15:49.648 3.982 - 4.006: 90.3527% ( 197) 00:15:49.648 4.006 - 4.030: 91.5662% ( 160) 00:15:49.648 4.030 - 4.053: 92.6962% ( 149) 00:15:49.648 4.053 - 4.077: 93.6367% ( 124) 00:15:49.648 4.077 - 4.101: 94.4482% ( 107) 00:15:49.648 4.101 - 4.124: 95.2446% ( 105) 00:15:49.648 4.124 - 4.148: 95.7527% ( 67) 00:15:49.648 4.148 - 4.172: 95.9954% ( 32) 00:15:49.648 4.172 - 4.196: 96.2988% ( 40) 00:15:49.648 4.196 - 4.219: 96.5567% ( 34) 00:15:49.648 4.219 - 4.243: 96.7842% ( 30) 00:15:49.648 4.243 - 4.267: 96.9207% ( 18) 00:15:49.648 4.267 - 4.290: 97.0800% ( 21) 00:15:49.648 4.290 - 4.314: 97.1786% ( 13) 00:15:49.648 4.314 - 4.338: 97.2469% ( 9) 00:15:49.648 4.338 - 4.361: 97.3455% ( 13) 00:15:49.648 4.361 - 4.385: 97.3834% ( 5) 00:15:49.648 4.385 - 4.409: 97.4289% ( 6) 00:15:49.648 4.409 - 4.433: 97.4744% ( 6) 00:15:49.648 4.433 - 4.456: 97.4972% ( 3) 00:15:49.648 4.456 - 4.480: 97.5199% ( 3) 00:15:49.648 4.480 - 4.504: 97.5275% ( 1) 00:15:49.648 4.504 - 4.527: 97.5502% ( 3) 00:15:49.648 4.527 - 4.551: 97.5654% ( 2) 00:15:49.648 4.551 - 4.575: 97.5730% ( 1) 00:15:49.648 4.575 - 4.599: 97.5958% ( 3) 00:15:49.648 4.599 - 4.622: 97.6185% ( 3) 00:15:49.648 4.622 - 4.646: 97.6413% ( 3) 00:15:49.648 4.646 - 4.670: 97.6640% ( 3) 00:15:49.648 4.693 - 4.717: 97.6716% ( 1) 00:15:49.648 4.717 - 4.741: 97.6868% ( 2) 00:15:49.648 4.741 - 4.764: 97.7171% ( 4) 00:15:49.648 4.764 - 4.788: 97.7474% ( 4) 00:15:49.648 4.788 - 4.812: 97.7854% ( 5) 00:15:49.648 4.812 - 4.836: 97.8536% ( 9) 00:15:49.648 4.836 
- 4.859: 97.8764% ( 3) 00:15:49.648 4.859 - 4.883: 97.9370% ( 8) 00:15:49.648 4.883 - 4.907: 97.9674% ( 4) 00:15:49.648 4.907 - 4.930: 97.9977% ( 4) 00:15:49.648 4.930 - 4.954: 98.0356% ( 5) 00:15:49.648 4.954 - 4.978: 98.0812% ( 6) 00:15:49.648 4.978 - 5.001: 98.1039% ( 3) 00:15:49.648 5.001 - 5.025: 98.1191% ( 2) 00:15:49.648 5.025 - 5.049: 98.1494% ( 4) 00:15:49.648 5.049 - 5.073: 98.1873% ( 5) 00:15:49.648 5.073 - 5.096: 98.2025% ( 2) 00:15:49.648 5.096 - 5.120: 98.2101% ( 1) 00:15:49.648 5.120 - 5.144: 98.2632% ( 7) 00:15:49.648 5.144 - 5.167: 98.2783% ( 2) 00:15:49.648 5.167 - 5.191: 98.3011% ( 3) 00:15:49.648 5.215 - 5.239: 98.3239% ( 3) 00:15:49.648 5.239 - 5.262: 98.3314% ( 1) 00:15:49.648 5.262 - 5.286: 98.3466% ( 2) 00:15:49.648 5.286 - 5.310: 98.3618% ( 2) 00:15:49.648 5.310 - 5.333: 98.3694% ( 1) 00:15:49.648 5.428 - 5.452: 98.3845% ( 2) 00:15:49.648 5.452 - 5.476: 98.3997% ( 2) 00:15:49.648 5.476 - 5.499: 98.4073% ( 1) 00:15:49.648 5.499 - 5.523: 98.4149% ( 1) 00:15:49.648 5.523 - 5.547: 98.4224% ( 1) 00:15:49.648 5.665 - 5.689: 98.4376% ( 2) 00:15:49.649 5.713 - 5.736: 98.4452% ( 1) 00:15:49.649 5.831 - 5.855: 98.4528% ( 1) 00:15:49.649 6.068 - 6.116: 98.4604% ( 1) 00:15:49.649 6.210 - 6.258: 98.4680% ( 1) 00:15:49.649 6.258 - 6.305: 98.4755% ( 1) 00:15:49.649 6.447 - 6.495: 98.4831% ( 1) 00:15:49.649 6.495 - 6.542: 98.4907% ( 1) 00:15:49.649 6.921 - 6.969: 98.4983% ( 1) 00:15:49.649 7.016 - 7.064: 98.5059% ( 1) 00:15:49.649 7.064 - 7.111: 98.5210% ( 2) 00:15:49.649 7.159 - 7.206: 98.5362% ( 2) 00:15:49.649 7.348 - 7.396: 98.5438% ( 1) 00:15:49.649 7.443 - 7.490: 98.5514% ( 1) 00:15:49.649 7.490 - 7.538: 98.5590% ( 1) 00:15:49.649 7.633 - 7.680: 98.5666% ( 1) 00:15:49.649 7.870 - 7.917: 98.5741% ( 1) 00:15:49.649 7.964 - 8.012: 98.5817% ( 1) 00:15:49.649 8.059 - 8.107: 98.6121% ( 4) 00:15:49.649 8.154 - 8.201: 98.6196% ( 1) 00:15:49.649 8.201 - 8.249: 98.6424% ( 3) 00:15:49.649 8.439 - 8.486: 98.6500% ( 1) 00:15:49.649 8.486 - 8.533: 98.6576% ( 1) 00:15:49.649 8.628 - 8.676: 98.6651% ( 1) 00:15:49.649 8.770 - 8.818: 98.6727% ( 1) 00:15:49.649 8.865 - 8.913: 98.6803% ( 1) 00:15:49.649 8.913 - 8.960: 98.6879% ( 1) 00:15:49.649 9.007 - 9.055: 98.6955% ( 1) 00:15:49.649 9.055 - 9.102: 98.7031% ( 1) 00:15:49.649 9.102 - 9.150: 98.7107% ( 1) 00:15:49.649 9.150 - 9.197: 98.7182% ( 1) 00:15:49.649 9.197 - 9.244: 98.7258% ( 1) 00:15:49.649 9.244 - 9.292: 98.7334% ( 1) 00:15:49.649 9.292 - 9.339: 98.7410% ( 1) 00:15:49.649 9.529 - 9.576: 98.7562% ( 2) 00:15:49.649 9.719 - 9.766: 98.7637% ( 1) 00:15:49.649 9.861 - 9.908: 98.7713% ( 1) 00:15:49.649 9.956 - 10.003: 98.7789% ( 1) 00:15:49.649 10.050 - 10.098: 98.7865% ( 1) 00:15:49.649 10.098 - 10.145: 98.7941% ( 1) 00:15:49.649 10.430 - 10.477: 98.8017% ( 1) 00:15:49.649 10.714 - 10.761: 98.8093% ( 1) 00:15:49.649 10.904 - 10.951: 98.8168% ( 1) 00:15:49.649 11.093 - 11.141: 98.8320% ( 2) 00:15:49.649 11.141 - 11.188: 98.8396% ( 1) 00:15:49.649 11.283 - 11.330: 98.8472% ( 1) 00:15:49.649 11.378 - 11.425: 98.8548% ( 1) 00:15:49.649 11.473 - 11.520: 98.8623% ( 1) 00:15:49.649 11.567 - 11.615: 98.8775% ( 2) 00:15:49.649 12.089 - 12.136: 98.8851% ( 1) 00:15:49.649 12.421 - 12.516: 98.8927% ( 1) 00:15:49.649 12.516 - 12.610: 98.9003% ( 1) 00:15:49.649 12.705 - 12.800: 98.9078% ( 1) 00:15:49.649 12.800 - 12.895: 98.9154% ( 1) 00:15:49.649 13.084 - 13.179: 98.9230% ( 1) 00:15:49.649 13.274 - 13.369: 98.9306% ( 1) 00:15:49.649 13.653 - 13.748: 98.9382% ( 1) 00:15:49.649 13.938 - 14.033: 98.9534% ( 2) 00:15:49.649 14.033 - 14.127: 98.9609% ( 1) 
00:15:49.649 14.127 - 14.222: 98.9685% ( 1) 00:15:49.649 14.507 - 14.601: 98.9761% ( 1) 00:15:49.649 14.601 - 14.696: 98.9837% ( 1) 00:15:49.649 14.696 - 14.791: 98.9989% ( 2) 00:15:49.649 15.076 - 15.170: 99.0064% ( 1) 00:15:49.649 15.265 - 15.360: 99.0140% ( 1) 00:15:49.649 17.161 - 17.256: 99.0216% ( 1) 00:15:49.649 17.256 - 17.351: 99.0368% ( 2) 00:15:49.649 17.351 - 17.446: 99.0444% ( 1) 00:15:49.649 17.446 - 17.541: 99.0520% ( 1) 00:15:49.649 17.541 - 17.636: 99.0747% ( 3) 00:15:49.649 17.636 - 17.730: 99.1126% ( 5) 00:15:49.649 17.730 - 17.825: 99.1202% ( 1) 00:15:49.649 17.825 - 17.920: 99.1657% ( 6) 00:15:49.649 17.920 - 18.015: 99.1961% ( 4) 00:15:49.649 18.015 - 18.110: 99.2643% ( 9) 00:15:49.649 18.110 - 18.204: 99.3022% ( 5) 00:15:49.649 18.204 - 18.299: 99.3705% ( 9) 00:15:49.649 18.299 - 18.394: 99.4312% ( 8) 00:15:49.649 18.394 - 18.489: 99.4918% ( 8) 00:15:49.649 18.489 - 18.584: 99.5374% ( 6) 00:15:49.649 18.584 - 18.679: 99.5677% ( 4) 00:15:49.649 18.679 - 18.773: 99.6132% ( 6) 00:15:49.649 18.773 - 18.868: 99.6511% ( 5) 00:15:49.649 18.868 - 18.963: 99.6739% ( 3) 00:15:49.649 18.963 - 19.058: 99.7345% ( 8) 00:15:49.649 19.153 - 19.247: 99.7421% ( 1) 00:15:49.649 19.247 - 19.342: 99.7649% ( 3) 00:15:49.649 19.342 - 19.437: 99.7801% ( 2) 00:15:49.649 19.437 - 19.532: 99.7876% ( 1) 00:15:49.649 19.721 - 19.816: 99.7952% ( 1) 00:15:49.649 20.006 - 20.101: 99.8028% ( 1) 00:15:49.649 20.101 - 20.196: 99.8104% ( 1) 00:15:49.649 20.764 - 20.859: 99.8180% ( 1) 00:15:49.649 21.997 - 22.092: 99.8256% ( 1) 00:15:49.649 22.187 - 22.281: 99.8331% ( 1) 00:15:49.649 23.419 - 23.514: 99.8407% ( 1) 00:15:49.649 23.609 - 23.704: 99.8483% ( 1) 00:15:49.649 27.876 - 28.065: 99.8559% ( 1) 00:15:49.649 28.255 - 28.444: 99.8635% ( 1) 00:15:49.649 3203.982 - 3228.255: 99.8711% ( 1) 00:15:49.649 3980.705 - 4004.978: 99.9621% ( 12) 00:15:49.649 4004.978 - 4029.250: 99.9924% ( 4) 00:15:49.649 5995.330 - 6019.603: 100.0000% ( 1) 00:15:49.649 00:15:49.649 Complete histogram 00:15:49.649 ================== 00:15:49.649 Range in us Cumulative Count 00:15:49.649 2.062 - 2.074: 12.0440% ( 1588) 00:15:49.649 2.074 - 2.086: 43.8301% ( 4191) 00:15:49.649 2.086 - 2.098: 46.0751% ( 296) 00:15:49.649 2.098 - 2.110: 54.0690% ( 1054) 00:15:49.649 2.110 - 2.121: 59.6511% ( 736) 00:15:49.649 2.121 - 2.133: 61.0694% ( 187) 00:15:49.649 2.133 - 2.145: 68.9041% ( 1033) 00:15:49.649 2.145 - 2.157: 75.2143% ( 832) 00:15:49.649 2.157 - 2.169: 76.0865% ( 115) 00:15:49.649 2.169 - 2.181: 79.1733% ( 407) 00:15:49.649 2.181 - 2.193: 81.1149% ( 256) 00:15:49.649 2.193 - 2.204: 81.6610% ( 72) 00:15:49.649 2.204 - 2.216: 84.7933% ( 413) 00:15:49.649 2.216 - 2.228: 88.6007% ( 502) 00:15:49.649 2.228 - 2.240: 90.2768% ( 221) 00:15:49.649 2.240 - 2.252: 92.1653% ( 249) 00:15:49.649 2.252 - 2.264: 93.3409% ( 155) 00:15:49.649 2.264 - 2.276: 93.5912% ( 33) 00:15:49.649 2.276 - 2.287: 93.9477% ( 47) 00:15:49.649 2.287 - 2.299: 94.4407% ( 65) 00:15:49.649 2.299 - 2.311: 94.9640% ( 69) 00:15:49.649 2.311 - 2.323: 95.2598% ( 39) 00:15:49.649 2.323 - 2.335: 95.3887% ( 17) 00:15:49.649 2.335 - 2.347: 95.4873% ( 13) 00:15:49.649 2.347 - 2.359: 95.6390% ( 20) 00:15:49.649 2.359 - 2.370: 95.8817% ( 32) 00:15:49.649 2.370 - 2.382: 96.2154% ( 44) 00:15:49.649 2.382 - 2.394: 96.7463% ( 70) 00:15:49.649 2.394 - 2.406: 97.0800% ( 44) 00:15:49.649 2.406 - 2.418: 97.2772% ( 26) 00:15:49.649 2.418 - 2.430: 97.4820% ( 27) 00:15:49.649 2.430 - 2.441: 97.6716% ( 25) 00:15:49.649 2.441 - 2.453: 97.7854% ( 15) 00:15:49.649 2.453 - 2.465: 97.9143% ( 17) 
00:15:49.649 2.465 - 2.477: 98.0584% ( 19) 00:15:49.649 2.477 - 2.489: 98.1494% ( 12) 00:15:49.649 2.489 - 2.501: 98.2253% ( 10) 00:15:49.649 2.501 - 2.513: 98.2708% ( 6) 00:15:49.649 2.513 - 2.524: 98.3239% ( 7) 00:15:49.649 2.524 - 2.536: 98.3694% ( 6) 00:15:49.649 2.536 - 2.548: 98.3921% ( 3) 00:15:49.649 2.548 - 2.560: 98.4300% ( 5) 00:15:49.649 2.560 - 2.572: 98.4376% ( 1) 00:15:49.649 2.572 - 2.584: 98.4680% ( 4) 00:15:49.649 2.607 - 2.619: 98.4831% ( 2) 00:15:49.649 2.631 - 2.643: 98.4983% ( 2) 00:15:49.649 2.643 - 2.655: 98.5059% ( 1) 00:15:49.649 2.667 - 2.679: 98.5286% ( 3) 00:15:49.649 2.679 - 2.690: 98.5514% ( 3) 00:15:49.649 2.702 - 2.714: 98.5590% ( 1) 00:15:49.649 2.714 - 2.726: 98.5666% ( 1) 00:15:49.649 2.726 - 2.738: 98.5817% ( 2) 00:15:49.649 2.738 - 2.750: 98.5893% ( 1) 00:15:49.649 2.773 - 2.785: 98.5969% ( 1) 00:15:49.649 2.797 - 2.809: 98.6045% ( 1) 00:15:49.649 2.809 - 2.821: 98.6121% ( 1) 00:15:49.649 2.856 - 2.868: 98.6196% ( 1) 00:15:49.649 3.390 - 3.413: 98.6424% ( 3) 00:15:49.649 3.508 - 3.532: 98.6500% ( 1) 00:15:49.649 3.532 - 3.556: 98.6576% ( 1) 00:15:49.649 3.556 - 3.579: 98.6803% ( 3) 00:15:49.649 3.579 - 3.603: 98.6879% ( 1) 00:15:49.649 3.603 - 3.627: 98.6955% ( 1) 00:15:49.649 3.627 - 3.650: 98.7031% ( 1) 00:15:49.649 3.650 - 3.674: 98.7258% ( 3) 00:15:49.649 3.698 - 3.721: 98.7334% ( 1) 00:15:49.649 3.864 - 3.887: 98.7410% ( 1) [2024-07-11 21:22:24.344622] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:49.649 4.148 - 4.172: 98.7486% ( 1) 00:15:49.649 4.978 - 5.001: 98.7562% ( 1) 00:15:49.649 5.096 - 5.120: 98.7637% ( 1) 00:15:49.649 5.333 - 5.357: 98.7713% ( 1) 00:15:49.649 5.404 - 5.428: 98.7789% ( 1) 00:15:49.649 5.476 - 5.499: 98.7865% ( 1) 00:15:49.649 5.523 - 5.547: 98.7941% ( 1) 00:15:49.649 5.547 - 5.570: 98.8017% ( 1) 00:15:49.649 5.736 - 5.760: 98.8093% ( 1) 00:15:49.649 5.831 - 5.855: 98.8168% ( 1) 00:15:49.649 5.879 - 5.902: 98.8244% ( 1) 00:15:49.649 5.902 - 5.926: 98.8320% ( 1) 00:15:49.649 6.305 - 6.353: 98.8396% ( 1) 00:15:49.649 6.400 - 6.447: 98.8472% ( 1) 00:15:49.649 6.874 - 6.921: 98.8548% ( 1) 00:15:49.649 7.016 - 7.064: 98.8623% ( 1) 00:15:49.649 7.633 - 7.680: 98.8699% ( 1) 00:15:49.649 8.059 - 8.107: 98.8775% ( 1) 00:15:49.649 15.455 - 15.550: 98.8851% ( 1) 00:15:49.649 15.550 - 15.644: 98.8927% ( 1) 00:15:49.649 15.644 - 15.739: 98.9078% ( 2) 00:15:49.649 15.739 - 15.834: 98.9306% ( 3) 00:15:49.650 15.834 - 15.929: 98.9534% ( 3) 00:15:49.650 15.929 - 16.024: 98.9685% ( 2) 00:15:49.650 16.024 - 16.119: 98.9837% ( 2) 00:15:49.650 16.119 - 16.213: 98.9989% ( 2) 00:15:49.650 16.213 - 16.308: 99.0064% ( 1) 00:15:49.650 16.308 - 16.403: 99.0520% ( 6) 00:15:49.650 16.403 - 16.498: 99.0975% ( 6) 00:15:49.650 16.498 - 16.593: 99.1126% ( 2) 00:15:49.650 16.593 - 16.687: 99.1657% ( 7) 00:15:49.650 16.687 - 16.782: 99.1809% ( 2) 00:15:49.650 16.782 - 16.877: 99.2340% ( 7) 00:15:49.650 16.877 - 16.972: 99.2416% ( 1) 00:15:49.650 16.972 - 17.067: 99.2643% ( 3) 00:15:49.650 17.067 - 17.161: 99.2795% ( 2) 00:15:49.650 17.161 - 17.256: 99.2947% ( 2) 00:15:49.650 17.446 - 17.541: 99.3022% ( 1) 00:15:49.650 17.541 - 17.636: 99.3098% ( 1) 00:15:49.650 17.825 - 17.920: 99.3174% ( 1) 00:15:49.650 17.920 - 18.015: 99.3250% ( 1) 00:15:49.650 18.015 - 18.110: 99.3326% ( 1) 00:15:49.650 18.110 - 18.204: 99.3477% ( 2) 00:15:49.650 18.299 - 18.394: 99.3553% ( 1) 00:15:49.650 20.575 - 20.670: 99.3629% ( 1) 00:15:49.650 2014.625 - 2026.761: 99.3705% ( 1) 00:15:49.650 3980.705 - 
4004.978: 99.7725% ( 53) 00:15:49.650 4004.978 - 4029.250: 100.0000% ( 30) 00:15:49.650 00:15:49.650 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:49.650 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:49.650 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:15:49.650 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:49.650 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:49.908 [ 00:15:49.908 { 00:15:49.908 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:49.908 "subtype": "Discovery", 00:15:49.908 "listen_addresses": [], 00:15:49.908 "allow_any_host": true, 00:15:49.908 "hosts": [] 00:15:49.908 }, 00:15:49.908 { 00:15:49.908 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:49.908 "subtype": "NVMe", 00:15:49.908 "listen_addresses": [ 00:15:49.908 { 00:15:49.908 "trtype": "VFIOUSER", 00:15:49.908 "adrfam": "IPv4", 00:15:49.908 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:49.908 "trsvcid": "0" 00:15:49.908 } 00:15:49.908 ], 00:15:49.908 "allow_any_host": true, 00:15:49.908 "hosts": [], 00:15:49.908 "serial_number": "SPDK1", 00:15:49.908 "model_number": "SPDK bdev Controller", 00:15:49.908 "max_namespaces": 32, 00:15:49.908 "min_cntlid": 1, 00:15:49.908 "max_cntlid": 65519, 00:15:49.908 "namespaces": [ 00:15:49.908 { 00:15:49.908 "nsid": 1, 00:15:49.908 "bdev_name": "Malloc1", 00:15:49.908 "name": "Malloc1", 00:15:49.908 "nguid": "323034520AB24C5F972C8E773A5985D8", 00:15:49.908 "uuid": "32303452-0ab2-4c5f-972c-8e773a5985d8" 00:15:49.908 }, 00:15:49.908 { 00:15:49.908 "nsid": 2, 00:15:49.908 "bdev_name": "Malloc3", 00:15:49.908 "name": "Malloc3", 00:15:49.908 "nguid": "FB4CD22099E84BB89AD2DA58C81B2D5D", 00:15:49.908 "uuid": "fb4cd220-99e8-4bb8-9ad2-da58c81b2d5d" 00:15:49.908 } 00:15:49.908 ] 00:15:49.908 }, 00:15:49.908 { 00:15:49.908 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:49.908 "subtype": "NVMe", 00:15:49.908 "listen_addresses": [ 00:15:49.908 { 00:15:49.908 "trtype": "VFIOUSER", 00:15:49.908 "adrfam": "IPv4", 00:15:49.908 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:49.908 "trsvcid": "0" 00:15:49.908 } 00:15:49.908 ], 00:15:49.908 "allow_any_host": true, 00:15:49.908 "hosts": [], 00:15:49.908 "serial_number": "SPDK2", 00:15:49.908 "model_number": "SPDK bdev Controller", 00:15:49.908 "max_namespaces": 32, 00:15:49.908 "min_cntlid": 1, 00:15:49.908 "max_cntlid": 65519, 00:15:49.908 "namespaces": [ 00:15:49.908 { 00:15:49.908 "nsid": 1, 00:15:49.908 "bdev_name": "Malloc2", 00:15:49.908 "name": "Malloc2", 00:15:49.908 "nguid": "00DFAE4C19134343B1C0A338883518BD", 00:15:49.908 "uuid": "00dfae4c-1913-4343-b1c0-a338883518bd" 00:15:49.908 } 00:15:49.908 ] 00:15:49.908 } 00:15:49.908 ] 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=872081 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t 
/tmp/aer_touch_file 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:49.908 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:50.166 EAL: No free 2048 kB hugepages reported on node 1 00:15:50.166 [2024-07-11 21:22:24.788221] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:50.166 Malloc4 00:15:50.166 21:22:24 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:50.425 [2024-07-11 21:22:25.142775] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:50.425 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:50.425 Asynchronous Event Request test 00:15:50.425 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.425 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:50.425 Registering asynchronous event callbacks... 00:15:50.425 Starting namespace attribute notice tests for all controllers... 00:15:50.425 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:50.425 aer_cb - Changed Namespace 00:15:50.425 Cleaning up... 
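[Editor's note] The "Asynchronous Event Request test" above exercises the namespace-attribute-changed path: attaching a namespace to a live subsystem makes the target raise an async event (aen_event_type 0x02), and the host's aer_cb then reads the Changed Namespace List log page (log page 4, as printed above). A minimal sketch of the trigger, reusing the same RPCs this run issues (rpc.py path shortened from the workspace path above):

$ rpc.py bdev_malloc_create 64 512 --name Malloc4                        # malloc bdev to back the new namespace
$ rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # attaching nsid 2 fires the AEN
$ rpc.py nvmf_get_subsystems                                             # Malloc4 should now appear as nsid 2, as in the listing below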
00:15:50.685 [ 00:15:50.685 { 00:15:50.685 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:50.685 "subtype": "Discovery", 00:15:50.685 "listen_addresses": [], 00:15:50.685 "allow_any_host": true, 00:15:50.685 "hosts": [] 00:15:50.685 }, 00:15:50.685 { 00:15:50.685 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:50.685 "subtype": "NVMe", 00:15:50.685 "listen_addresses": [ 00:15:50.685 { 00:15:50.685 "trtype": "VFIOUSER", 00:15:50.685 "adrfam": "IPv4", 00:15:50.685 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:50.685 "trsvcid": "0" 00:15:50.685 } 00:15:50.685 ], 00:15:50.685 "allow_any_host": true, 00:15:50.685 "hosts": [], 00:15:50.685 "serial_number": "SPDK1", 00:15:50.685 "model_number": "SPDK bdev Controller", 00:15:50.685 "max_namespaces": 32, 00:15:50.685 "min_cntlid": 1, 00:15:50.685 "max_cntlid": 65519, 00:15:50.685 "namespaces": [ 00:15:50.685 { 00:15:50.685 "nsid": 1, 00:15:50.685 "bdev_name": "Malloc1", 00:15:50.685 "name": "Malloc1", 00:15:50.685 "nguid": "323034520AB24C5F972C8E773A5985D8", 00:15:50.685 "uuid": "32303452-0ab2-4c5f-972c-8e773a5985d8" 00:15:50.685 }, 00:15:50.685 { 00:15:50.685 "nsid": 2, 00:15:50.685 "bdev_name": "Malloc3", 00:15:50.685 "name": "Malloc3", 00:15:50.685 "nguid": "FB4CD22099E84BB89AD2DA58C81B2D5D", 00:15:50.685 "uuid": "fb4cd220-99e8-4bb8-9ad2-da58c81b2d5d" 00:15:50.685 } 00:15:50.685 ] 00:15:50.685 }, 00:15:50.685 { 00:15:50.685 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:50.685 "subtype": "NVMe", 00:15:50.685 "listen_addresses": [ 00:15:50.685 { 00:15:50.685 "trtype": "VFIOUSER", 00:15:50.685 "adrfam": "IPv4", 00:15:50.685 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:50.685 "trsvcid": "0" 00:15:50.685 } 00:15:50.685 ], 00:15:50.685 "allow_any_host": true, 00:15:50.685 "hosts": [], 00:15:50.685 "serial_number": "SPDK2", 00:15:50.685 "model_number": "SPDK bdev Controller", 00:15:50.685 "max_namespaces": 32, 00:15:50.685 "min_cntlid": 1, 00:15:50.685 "max_cntlid": 65519, 00:15:50.685 "namespaces": [ 00:15:50.685 { 00:15:50.685 "nsid": 1, 00:15:50.685 "bdev_name": "Malloc2", 00:15:50.685 "name": "Malloc2", 00:15:50.685 "nguid": "00DFAE4C19134343B1C0A338883518BD", 00:15:50.685 "uuid": "00dfae4c-1913-4343-b1c0-a338883518bd" 00:15:50.685 }, 00:15:50.685 { 00:15:50.685 "nsid": 2, 00:15:50.685 "bdev_name": "Malloc4", 00:15:50.685 "name": "Malloc4", 00:15:50.685 "nguid": "8AF3FC4BAE90445FBE9B7A10A6A69873", 00:15:50.685 "uuid": "8af3fc4b-ae90-445f-be9b-7a10a6a69873" 00:15:50.685 } 00:15:50.685 ] 00:15:50.685 } 00:15:50.685 ] 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 872081 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 866563 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 866563 ']' 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 866563 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 866563 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 
00:15:50.685 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 866563' 00:15:50.685 killing process with pid 866563 00:15:50.686 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 866563 00:15:50.686 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 866563 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=872221 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 872221' 00:15:51.254 Process pid: 872221 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 872221 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 872221 ']' 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:51.254 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:51.254 21:22:25 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:51.254 [2024-07-11 21:22:25.830040] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:51.254 [2024-07-11 21:22:25.831097] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:51.254 [2024-07-11 21:22:25.831164] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:51.254 EAL: No free 2048 kB hugepages reported on node 1 00:15:51.254 [2024-07-11 21:22:25.895474] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:51.254 [2024-07-11 21:22:25.983982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:51.254 [2024-07-11 21:22:25.984078] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:15:51.254 [2024-07-11 21:22:25.984093] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:51.254 [2024-07-11 21:22:25.984105] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:51.254 [2024-07-11 21:22:25.984139] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:51.254 [2024-07-11 21:22:25.984361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:51.254 [2024-07-11 21:22:25.984421] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:51.254 [2024-07-11 21:22:25.984397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:51.254 [2024-07-11 21:22:25.984423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:51.512 [2024-07-11 21:22:26.082454] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:51.512 [2024-07-11 21:22:26.082693] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:51.512 [2024-07-11 21:22:26.082985] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:51.512 [2024-07-11 21:22:26.083593] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:51.512 [2024-07-11 21:22:26.083836] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:15:51.512 21:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:51.512 21:22:26 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:51.512 21:22:26 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:52.449 21:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:52.706 21:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:52.706 21:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:52.706 21:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:52.706 21:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:52.706 21:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:52.964 Malloc1 00:15:52.965 21:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:53.222 21:22:27 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:53.479 21:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:53.737 21:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 
$NUM_DEVICES) 00:15:53.737 21:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:53.737 21:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:53.994 Malloc2 00:15:53.994 21:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:54.252 21:22:28 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:54.509 21:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:54.768 21:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:54.768 21:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 872221 00:15:54.768 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 872221 ']' 00:15:54.768 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 872221 00:15:54.768 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:54.768 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:54.768 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872221 00:15:54.768 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:54.769 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:54.769 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872221' 00:15:54.769 killing process with pid 872221 00:15:54.769 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 872221 00:15:54.769 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 872221 00:15:55.027 21:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:55.027 21:22:29 nvmf_tcp.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:55.027 00:15:55.027 real 0m52.382s 00:15:55.027 user 3m26.973s 00:15:55.027 sys 0m4.259s 00:15:55.027 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:55.027 21:22:29 nvmf_tcp.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:55.027 ************************************ 00:15:55.027 END TEST nvmf_vfio_user 00:15:55.027 ************************************ 00:15:55.027 21:22:29 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:15:55.027 21:22:29 nvmf_tcp -- nvmf/nvmf.sh@42 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:55.027 21:22:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:55.027 21:22:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:55.027 21:22:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:15:55.027 ************************************ 00:15:55.027 START TEST 
nvmf_vfio_user_nvme_compliance 00:15:55.027 ************************************ 00:15:55.027 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:55.285 * Looking for test storage... 00:15:55.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:55.285 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=872818 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 872818' 00:15:55.286 Process pid: 872818 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 872818 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 872818 ']' 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:55.286 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:55.286 21:22:29 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:55.286 [2024-07-11 21:22:29.892193] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:15:55.286 [2024-07-11 21:22:29.892289] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:55.286 EAL: No free 2048 kB hugepages reported on node 1 00:15:55.286 [2024-07-11 21:22:29.961094] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:55.545 [2024-07-11 21:22:30.056559] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:55.545 [2024-07-11 21:22:30.056632] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:55.545 [2024-07-11 21:22:30.056650] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:55.545 [2024-07-11 21:22:30.056664] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:55.545 [2024-07-11 21:22:30.056683] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:15:55.545 [2024-07-11 21:22:30.056776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:55.545 [2024-07-11 21:22:30.056807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:55.545 [2024-07-11 21:22:30.056811] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:55.545 21:22:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:55.545 21:22:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:55.545 21:22:30 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:56.520 malloc0 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:56.520 21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:56.520 
21:22:31 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:56.781 EAL: No free 2048 kB hugepages reported on node 1 00:15:56.781 00:15:56.781 00:15:56.781 CUnit - A unit testing framework for C - Version 2.1-3 00:15:56.781 http://cunit.sourceforge.net/ 00:15:56.781 00:15:56.781 00:15:56.781 Suite: nvme_compliance 00:15:56.781 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-11 21:22:31.409210] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.781 [2024-07-11 21:22:31.410679] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:56.781 [2024-07-11 21:22:31.410703] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:56.781 [2024-07-11 21:22:31.410731] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:56.781 [2024-07-11 21:22:31.412236] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.781 passed 00:15:56.781 Test: admin_identify_ctrlr_verify_fused ...[2024-07-11 21:22:31.498855] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:56.781 [2024-07-11 21:22:31.501877] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:56.781 passed 00:15:57.040 Test: admin_identify_ns ...[2024-07-11 21:22:31.587448] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.040 [2024-07-11 21:22:31.647788] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:57.040 [2024-07-11 21:22:31.655788] ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:57.040 [2024-07-11 21:22:31.676903] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.040 passed 00:15:57.040 Test: admin_get_features_mandatory_features ...[2024-07-11 21:22:31.763192] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.040 [2024-07-11 21:22:31.766213] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.040 passed 00:15:57.298 Test: admin_get_features_optional_features ...[2024-07-11 21:22:31.848767] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.298 [2024-07-11 21:22:31.851805] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.298 passed 00:15:57.298 Test: admin_set_features_number_of_queues ...[2024-07-11 21:22:31.936999] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.298 [2024-07-11 21:22:32.041873] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.556 passed 00:15:57.556 Test: admin_get_log_page_mandatory_logs ...[2024-07-11 21:22:32.124640] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.556 [2024-07-11 21:22:32.127662] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.556 passed 00:15:57.556 Test: admin_get_log_page_with_lpo ...[2024-07-11 21:22:32.212882] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.556 [2024-07-11 21:22:32.281785] 
ctrlr.c:2677:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:57.556 [2024-07-11 21:22:32.294844] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.556 passed 00:15:57.816 Test: fabric_property_get ...[2024-07-11 21:22:32.378673] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.816 [2024-07-11 21:22:32.379977] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:57.816 [2024-07-11 21:22:32.381701] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.816 passed 00:15:57.816 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-11 21:22:32.463259] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:57.816 [2024-07-11 21:22:32.464529] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:57.816 [2024-07-11 21:22:32.466280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:57.816 passed 00:15:57.816 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-11 21:22:32.550305] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.076 [2024-07-11 21:22:32.633780] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:58.076 [2024-07-11 21:22:32.649761] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:58.076 [2024-07-11 21:22:32.654870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.076 passed 00:15:58.076 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-11 21:22:32.737078] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.076 [2024-07-11 21:22:32.738383] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:58.076 [2024-07-11 21:22:32.742107] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.076 passed 00:15:58.076 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-11 21:22:32.823341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.334 [2024-07-11 21:22:32.901764] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:58.334 [2024-07-11 21:22:32.925780] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:58.334 [2024-07-11 21:22:32.930881] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.334 passed 00:15:58.334 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-11 21:22:33.014544] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.334 [2024-07-11 21:22:33.015877] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:58.334 [2024-07-11 21:22:33.015933] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:58.334 [2024-07-11 21:22:33.017567] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.334 passed 00:15:58.334 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-11 21:22:33.097784] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.591 [2024-07-11 21:22:33.190776] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: 
invalid I/O queue size 1 00:15:58.591 [2024-07-11 21:22:33.198767] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:58.591 [2024-07-11 21:22:33.206779] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:58.591 [2024-07-11 21:22:33.214777] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:58.591 [2024-07-11 21:22:33.242866] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.591 passed 00:15:58.591 Test: admin_create_io_sq_verify_pc ...[2024-07-11 21:22:33.326536] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:58.591 [2024-07-11 21:22:33.342778] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:58.591 [2024-07-11 21:22:33.359870] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:58.848 passed 00:15:58.848 Test: admin_create_io_qp_max_qps ...[2024-07-11 21:22:33.442437] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:59.785 [2024-07-11 21:22:34.545784] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:16:00.353 [2024-07-11 21:22:34.921369] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:00.353 passed 00:16:00.353 Test: admin_create_io_sq_shared_cq ...[2024-07-11 21:22:35.007715] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:16:00.611 [2024-07-11 21:22:35.136762] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:16:00.611 [2024-07-11 21:22:35.173845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:16:00.611 passed 00:16:00.611 00:16:00.611 Run Summary: Type Total Ran Passed Failed Inactive 00:16:00.611 suites 1 1 n/a 0 0 00:16:00.611 tests 18 18 18 0 0 00:16:00.611 asserts 360 360 360 0 n/a 00:16:00.611 00:16:00.611 Elapsed time = 1.560 seconds 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 872818 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 872818 ']' 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 872818 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 872818 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 872818' 00:16:00.611 killing process with pid 872818 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 872818 00:16:00.611 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 872818 00:16:00.869 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- 
compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:16:00.869 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:16:00.869 00:16:00.869 real 0m5.709s 00:16:00.869 user 0m16.090s 00:16:00.869 sys 0m0.550s 00:16:00.869 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:00.869 21:22:35 nvmf_tcp.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:16:00.869 ************************************ 00:16:00.869 END TEST nvmf_vfio_user_nvme_compliance 00:16:00.869 ************************************ 00:16:00.869 21:22:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:00.869 21:22:35 nvmf_tcp -- nvmf/nvmf.sh@43 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:00.869 21:22:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:00.869 21:22:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:00.869 21:22:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:00.869 ************************************ 00:16:00.869 START TEST nvmf_vfio_user_fuzz 00:16:00.869 ************************************ 00:16:00.869 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:16:00.869 * Looking for test storage... 00:16:00.869 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:00.870 21:22:35 
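For reference, the vfio-user target provisioning that both the compliance stage above and the fuzz stage below drive through rpc_cmd reduces to the following RPC sequence. This is a minimal sketch assuming a running nvmf_tgt and the stock scripts/rpc.py from the SPDK tree (rpc_cmd in the xtrace is essentially a wrapper around that script); the compliance run additionally caps the subsystem at 32 namespaces with -m 32, the fuzz run omits it.

  # Provision a 64 MiB, 512-byte-block malloc namespace behind a
  # vfio-user listener (sketch; arguments taken from the xtrace above).
  mkdir -p /var/run/vfio-user
  ./scripts/rpc.py nvmf_create_transport -t VFIOUSER
  ./scripts/rpc.py bdev_malloc_create 64 512 -b malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 \
      -t VFIOUSER -a /var/run/vfio-user -s 0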
nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=873544 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 873544' 00:16:00.870 Process pid: 873544 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 873544 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 873544 ']' 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:00.870 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
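To stand the fuzz target up by hand, the launch captured above reduces to the sketch below. The flag readings are inferred from the surrounding trace (-i comes from NVMF_APP_SHM_ID, -e 0xFFFF is the tracepoint group mask echoed later as "Tracepoint Group Mask 0xFFFF specified", -m 0x1 is the core mask), not re-derived from the tool's help text; the backgrounding and pid capture are illustrative, not a quote of the script.

  # Start the target process the fuzzer will attack (sketch).
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
  nvmfpid=$!
  # waitforlisten in the log then polls until this pid has the RPC
  # socket (/var/tmp/spdk.sock by default) open before issuing RPCs.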
00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:00.870 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:01.439 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:01.439 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:16:01.439 21:22:35 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.374 malloc0 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:02.374 21:22:36 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:02.374 21:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:02.374 21:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:16:02.374 21:22:37 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:34.447 Fuzzing completed. 
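The 30-second pass that just completed can be replayed outside CI with the same binary and flags; the annotations below are read off the invocation earlier in this log rather than from the tool's help text.

  # Replay the fuzz run against the vfio-user target (sketch).
  #   -m 0x2     core mask: run the fuzzer app on core 1
  #   -t 30      run time in seconds
  #   -S 123456  fixed random seed, so a failure is reproducible
  #   -F <trid>  transport ID of the subsystem under test
  #   -N -a      passed through exactly as vfio_user_fuzz.sh does
  ./test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 \
      -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' \
      -N -a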
Shutting down the fuzz application 00:16:34.447 00:16:34.447 Dumping successful admin opcodes: 00:16:34.447 8, 9, 10, 24, 00:16:34.447 Dumping successful io opcodes: 00:16:34.447 0, 00:16:34.447 NS: 0x200003a1ef00 I/O qp, Total commands completed: 586977, total successful commands: 2260, random_seed: 2040946816 00:16:34.447 NS: 0x200003a1ef00 admin qp, Total commands completed: 74930, total successful commands: 585, random_seed: 1098838016 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 873544 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 873544 ']' 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 873544 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 873544 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 873544' 00:16:34.447 killing process with pid 873544 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 873544 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 873544 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:34.447 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:34.448 00:16:34.448 real 0m32.211s 00:16:34.448 user 0m31.290s 00:16:34.448 sys 0m28.773s 00:16:34.448 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:34.448 21:23:07 nvmf_tcp.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:34.448 ************************************ 00:16:34.448 END TEST nvmf_vfio_user_fuzz 00:16:34.448 ************************************ 00:16:34.448 21:23:07 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:34.448 21:23:07 nvmf_tcp -- nvmf/nvmf.sh@47 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:34.448 21:23:07 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:34.448 21:23:07 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:34.448 21:23:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:34.448 ************************************ 00:16:34.448 START 
TEST nvmf_host_management 00:16:34.448 ************************************ 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:16:34.448 * Looking for test storage... 00:16:34.448 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.448 21:23:07 
nvmf_tcp.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:34.448 21:23:07 
nvmf_tcp.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:16:34.448 21:23:07 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:35.383 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.383 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:35.384 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:35.384 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:35.384 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:35.384 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:35.384 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:16:35.384 00:16:35.384 --- 10.0.0.2 ping statistics --- 00:16:35.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.384 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:35.384 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:35.384 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.136 ms 00:16:35.384 00:16:35.384 --- 10.0.0.1 ping statistics --- 00:16:35.384 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:35.384 rtt min/avg/max/mdev = 0.136/0.136/0.136/0.000 ms 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.384 21:23:09 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=878979 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 878979 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 878979 ']' 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:16:35.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.384 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.384 [2024-07-11 21:23:10.048631] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:35.384 [2024-07-11 21:23:10.048715] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:35.384 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.384 [2024-07-11 21:23:10.116184] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:35.644 [2024-07-11 21:23:10.208686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:35.644 [2024-07-11 21:23:10.208751] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:16:35.644 [2024-07-11 21:23:10.208792] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:35.644 [2024-07-11 21:23:10.208807] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:35.644 [2024-07-11 21:23:10.208819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:35.644 [2024-07-11 21:23:10.208916] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:35.644 [2024-07-11 21:23:10.209069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:16:35.644 [2024-07-11 21:23:10.209137] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:16:35.644 [2024-07-11 21:23:10.209140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.644 [2024-07-11 21:23:10.362567] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.644 21:23:10 
nvmf_tcp.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.644 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.644 Malloc0 00:16:35.902 [2024-07-11 21:23:10.423717] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=879026 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 879026 /var/tmp/bdevperf.sock 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 879026 ']' 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:16:35.902 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
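The bdevperf launch above pairs a generated NVMe-oF attach config with the workload flags. A condensed sketch, assuming the gen_nvmf_target_json helper from test/nvmf/common.sh is in scope; the /dev/fd/63 seen in the xtrace is simply bash's name for the process substitution, so no config file ever touches disk.

  # Run the initiator-side workload against the TCP target (sketch).
  ./build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      --json <(gen_nvmf_target_json 0) \
      -q 64 -o 65536 -w verify -t 10
  # -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: data-verification
  # workload; -t 10: run for 10 seconds.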
00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:35.902 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:35.902 { 00:16:35.902 "params": { 00:16:35.902 "name": "Nvme$subsystem", 00:16:35.902 "trtype": "$TEST_TRANSPORT", 00:16:35.902 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:35.903 "adrfam": "ipv4", 00:16:35.903 "trsvcid": "$NVMF_PORT", 00:16:35.903 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:35.903 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:35.903 "hdgst": ${hdgst:-false}, 00:16:35.903 "ddgst": ${ddgst:-false} 00:16:35.903 }, 00:16:35.903 "method": "bdev_nvme_attach_controller" 00:16:35.903 } 00:16:35.903 EOF 00:16:35.903 )") 00:16:35.903 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:35.903 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:35.903 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:35.903 21:23:10 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:35.903 "params": { 00:16:35.903 "name": "Nvme0", 00:16:35.903 "trtype": "tcp", 00:16:35.903 "traddr": "10.0.0.2", 00:16:35.903 "adrfam": "ipv4", 00:16:35.903 "trsvcid": "4420", 00:16:35.903 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:35.903 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:35.903 "hdgst": false, 00:16:35.903 "ddgst": false 00:16:35.903 }, 00:16:35.903 "method": "bdev_nvme_attach_controller" 00:16:35.903 }' 00:16:35.903 [2024-07-11 21:23:10.504360] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:35.903 [2024-07-11 21:23:10.504450] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879026 ] 00:16:35.903 EAL: No free 2048 kB hugepages reported on node 1 00:16:35.903 [2024-07-11 21:23:10.568055] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:35.903 [2024-07-11 21:23:10.654427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:36.160 Running I/O for 10 seconds... 
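While bdevperf runs, the script gates on actual I/O before poking the target: the waitforio helper exercised below polls bdev_get_iostat over the bdevperf RPC socket until the Nvme0n1 bdev has completed at least 100 reads. A condensed sketch of that loop, assuming scripts/rpc.py (the real helper lives in host_management.sh):

  # Wait until I/O is flowing on Nvme0n1 (condensed waitforio sketch).
  i=10
  while (( i != 0 )); do
      count=$(./scripts/rpc.py -s /var/tmp/bdevperf.sock \
                  bdev_get_iostat -b Nvme0n1 | jq -r '.bdevs[0].num_read_ops')
      [ "$count" -ge 100 ] && break   # enough reads have completed
      sleep 0.25
      (( i-- ))                       # give up after 10 polls
  done

In the trace that follows, the first poll sees 67 reads and the second 515, at which point ret flips to 0 and the loop breaks.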
00:16:36.160 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:36.160 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:16:36.160 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:16:36.160 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.160 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:16:36.417 21:23:10 nvmf_tcp.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- 
target/host_management.sh@59 -- # ret=0 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@60 -- # break 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.678 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:36.678 [2024-07-11 21:23:11.262703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:16:36.678 [2024-07-11 21:23:11.262767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 52 further WRITE commands (cid:12-63, lba:75264-81792, len:128) and 11 READ commands (cid:0-10, lba:73728-75008, len:128), each aborted with the same ABORTED - SQ DELETION (00/08) completion while the host was removed ...]
00:16:36.680 [2024-07-11 21:23:11.265072] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1f2c100 was disconnected and freed. reset controller. 
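Before injecting the failure above, the harness confirmed I/O was flowing with its waitforio helper (the host_management.sh@45-64 xtrace earlier: read_io_count=67, then 515 against a threshold of 100). A rough reconstruction from that trace, with argument handling simplified and rpc_cmd assumed to wrap scripts/rpc.py:

# Sketch of waitforio, pieced together from the xtrace in this log.
waitforio() {
    local sock=$1 bdev=$2          # /var/tmp/bdevperf.sock and Nvme0n1 here
    [ -z "$sock" ] && return 1
    [ -z "$bdev" ] && return 1
    local ret=1 i read_io_count
    # Poll up to 10 times, 0.25 s apart, until the bdev has served >= 100 reads.
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$sock" bdev_get_iostat -b "$bdev" |
            jq -r '.bdevs[0].num_read_ops')
        if [ "${read_io_count:-0}" -ge 100 ]; then
            ret=0
            break
        fi
        sleep 0.25
    done
    return $ret
}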
00:16:36.680 [2024-07-11 21:23:11.266223] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:16:36.681 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.681 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:16:36.681 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.681 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:16:36.681 task offset: 75136 on job bdev=Nvme0n1 fails
00:16:36.681
00:16:36.681                               Latency(us)
00:16:36.681 Device Information            : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:16:36.681 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:36.681 Job: Nvme0n1 ended in about 0.39 seconds with error
00:16:36.681 Verification LBA range: start 0x0 length 0x400
00:16:36.681 Nvme0n1                       :       0.39   1494.31    93.39   166.03   0.00   37403.44  2924.85  34564.17
00:16:36.681 ===================================================================================================================
00:16:36.681 Total                         :              1494.31    93.39   166.03   0.00   37403.44  2924.85  34564.17
00:16:36.681 [2024-07-11 21:23:11.268125] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:16:36.681 [2024-07-11 21:23:11.268167] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b1aed0 (9): Bad file descriptor 00:16:36.681 21:23:11 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.681 21:23:11 nvmf_tcp.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 [2024-07-11 21:23:11.319223] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
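The failure injection and recovery in this run reduce to two RPCs against the target; a sketch using the scripts/rpc.py path that appears elsewhere in this log (rpc_cmd in the harness is assumed to wrap it):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Revoking the host's access aborts its in-flight I/O, producing the burst
# of "ABORTED - SQ DELETION" completions above and failing the bdevperf job.
$rpc nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0

# Re-adding the host lets the initiator's automatic controller reset
# reconnect ("Resetting controller successful" above).
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0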
00:16:37.617 21:23:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 879026 00:16:37.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (879026) - No such process 00:16:37.617 21:23:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@91 -- # true 00:16:37.617 21:23:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:16:37.617 21:23:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:16:37.617 21:23:12 nvmf_tcp.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:16:37.618 21:23:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:16:37.618 21:23:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:16:37.618 21:23:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:16:37.618 21:23:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:16:37.618 { 00:16:37.618 "params": { 00:16:37.618 "name": "Nvme$subsystem", 00:16:37.618 "trtype": "$TEST_TRANSPORT", 00:16:37.618 "traddr": "$NVMF_FIRST_TARGET_IP", 00:16:37.618 "adrfam": "ipv4", 00:16:37.618 "trsvcid": "$NVMF_PORT", 00:16:37.618 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:16:37.618 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:16:37.618 "hdgst": ${hdgst:-false}, 00:16:37.618 "ddgst": ${ddgst:-false} 00:16:37.618 }, 00:16:37.618 "method": "bdev_nvme_attach_controller" 00:16:37.618 } 00:16:37.618 EOF 00:16:37.618 )") 00:16:37.618 21:23:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:16:37.618 21:23:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:16:37.618 21:23:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:16:37.618 21:23:12 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:16:37.618 "params": { 00:16:37.618 "name": "Nvme0", 00:16:37.618 "trtype": "tcp", 00:16:37.618 "traddr": "10.0.0.2", 00:16:37.618 "adrfam": "ipv4", 00:16:37.618 "trsvcid": "4420", 00:16:37.618 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:16:37.618 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:16:37.618 "hdgst": false, 00:16:37.618 "ddgst": false 00:16:37.618 }, 00:16:37.618 "method": "bdev_nvme_attach_controller" 00:16:37.618 }' 00:16:37.618 [2024-07-11 21:23:12.320269] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:37.618 [2024-07-11 21:23:12.320362] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid879301 ] 00:16:37.618 EAL: No free 2048 kB hugepages reported on node 1 00:16:37.618 [2024-07-11 21:23:12.381787] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:37.875 [2024-07-11 21:23:12.466313] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:37.875 Running I/O for 1 seconds... 
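The "No such process" from host_management.sh line 91 above is expected: bdevperf usually dies with the injected failure, so the cleanup deliberately tolerates a failed kill. The trap installed earlier (host_management.sh@78, verbatim from the trace) shows the same pattern:

# kill -9 may target an already-dead pid; || true keeps the test alive.
trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT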
00:16:39.308
00:16:39.308                               Latency(us)
00:16:39.308 Device Information            : runtime(s)     IOPS    MiB/s   Fail/s   TO/s    Average      min       max
00:16:39.308 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:16:39.308 Verification LBA range: start 0x0 length 0x400
00:16:39.308 Nvme0n1                       :       1.01   1653.38   103.34     0.00   0.00   37981.11  6990.51  37476.88
00:16:39.308 ===================================================================================================================
00:16:39.308 Total                         :              1653.38   103.34     0.00   0.00   37981.11  6990.51  37476.88
00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:39.308 rmmod nvme_tcp 00:16:39.308 rmmod nvme_fabrics 00:16:39.308 rmmod nvme_keyring 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 878979 ']' 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 878979 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 878979 ']' 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 878979 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # uname 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 878979 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 878979' 00:16:39.308 killing process with pid 878979 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 878979 00:16:39.308 21:23:13 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 878979 00:16:39.567 [2024-07-11 21:23:14.144035] app.c: 
710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:16:39.567 21:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:39.567 21:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:39.567 21:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:39.567 21:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:39.567 21:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:39.567 21:23:14 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:39.567 21:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:39.567 21:23:14 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.474 21:23:16 nvmf_tcp.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:16:41.474 21:23:16 nvmf_tcp.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:16:41.474 00:16:41.474 real 0m8.421s 00:16:41.474 user 0m18.183s 00:16:41.474 sys 0m2.737s 00:16:41.474 21:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:41.474 21:23:16 nvmf_tcp.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:16:41.474 ************************************ 00:16:41.474 END TEST nvmf_host_management 00:16:41.474 ************************************ 00:16:41.474 21:23:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:16:41.474 21:23:16 nvmf_tcp -- nvmf/nvmf.sh@48 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:41.474 21:23:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:41.474 21:23:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:41.474 21:23:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:16:41.732 ************************************ 00:16:41.732 START TEST nvmf_lvol 00:16:41.732 ************************************ 00:16:41.732 21:23:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:16:41.732 * Looking for test storage... 
00:16:41.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:41.732 21:23:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:41.732 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:16:41.732 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:41.732 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:41.732 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:41.732 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:41.732 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.733 21:23:16 
nvmf_tcp.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:16:41.733 21:23:16 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:16:43.635 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:16:43.635 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:16:43.635 Found net devices under 0000:0a:00.0: cvl_0_0 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.635 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:16:43.636 Found net devices under 0000:0a:00.1: cvl_0_1 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:43.636 
21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:43.636 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:43.636 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:16:43.636 00:16:43.636 --- 10.0.0.2 ping statistics --- 00:16:43.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.636 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:43.636 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:43.636 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:16:43.636 00:16:43.636 --- 10.0.0.1 ping statistics --- 00:16:43.636 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:43.636 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:43.636 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=881378 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 881378 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 881378 ']' 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:43.896 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:43.896 [2024-07-11 21:23:18.460789] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:16:43.896 [2024-07-11 21:23:18.460888] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:43.896 EAL: No free 2048 kB hugepages reported on node 1 00:16:43.896 [2024-07-11 21:23:18.525067] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:16:43.896 [2024-07-11 21:23:18.608372] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:43.896 [2024-07-11 21:23:18.608429] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:43.896 [2024-07-11 21:23:18.608453] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:43.896 [2024-07-11 21:23:18.608464] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:43.896 [2024-07-11 21:23:18.608474] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:43.896 [2024-07-11 21:23:18.608574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.896 [2024-07-11 21:23:18.608637] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:16:43.896 [2024-07-11 21:23:18.608640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.155 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:44.155 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:16:44.155 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:44.155 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:44.155 21:23:18 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:16:44.155 21:23:18 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.155 21:23:18 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:16:44.413 [2024-07-11 21:23:18.987313] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.413 21:23:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.672 21:23:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:16:44.672 21:23:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:16:44.929 21:23:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:16:44.929 21:23:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:16:45.188 21:23:19 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:16:45.446 21:23:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e176f004-6215-4c32-8533-98fcf3931d00 00:16:45.446 21:23:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e176f004-6215-4c32-8533-98fcf3931d00 lvol 20 00:16:45.704 21:23:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=1eeb1e2d-4f3f-4636-8dcc-4958c903b7e1 00:16:45.704 21:23:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:16:45.962 21:23:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1eeb1e2d-4f3f-4636-8dcc-4958c903b7e1 00:16:46.220 21:23:20 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
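Condensing the provisioning xtrace above into plain rpc.py calls, the nvmf_lvol setup is the sequence below. This is a sketch: the variable capture mirrors what the harness traces ($? outputs become bdev names and UUIDs), and the UUIDs shown in this log will differ per run.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# TCP transport, then two 64 MiB malloc bdevs striped into a raid0 that
# carries an lvstore and a 20 MiB lvol (sizes from the constants above).
$rpc nvmf_create_transport -t tcp -o -u 8192
base0=$($rpc bdev_malloc_create 64 512)    # -> Malloc0
base1=$($rpc bdev_malloc_create 64 512)    # -> Malloc1
$rpc bdev_raid_create -n raid0 -z 64 -r 0 -b "$base0 $base1"
lvs=$($rpc bdev_lvol_create_lvstore raid0 lvs)
lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 20)

# Export the lvol over NVMe/TCP.
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420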
00:16:46.477 [2024-07-11 21:23:21.073838] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:46.477 21:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:16:46.734 21:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=881796 00:16:46.734 21:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:16:46.734 21:23:21 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:16:46.734 EAL: No free 2048 kB hugepages reported on node 1 00:16:47.666 21:23:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 1eeb1e2d-4f3f-4636-8dcc-4958c903b7e1 MY_SNAPSHOT 00:16:47.924 21:23:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=09fb31ba-9653-4800-a933-863f8056b4dc 00:16:47.924 21:23:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 1eeb1e2d-4f3f-4636-8dcc-4958c903b7e1 30 00:16:48.182 21:23:22 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 09fb31ba-9653-4800-a933-863f8056b4dc MY_CLONE 00:16:48.747 21:23:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1c975a63-dd9b-4ec2-8544-a94d0786072b 00:16:48.747 21:23:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1c975a63-dd9b-4ec2-8544-a94d0786072b 00:16:49.316 21:23:23 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 881796 00:16:57.424 Initializing NVMe Controllers 00:16:57.424 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:16:57.424 Controller IO queue size 128, less than required. 00:16:57.424 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:16:57.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:16:57.424 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:16:57.424 Initialization complete. Launching workers. 
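Aside: while the spdk_nvme_perf job above drives random 4 KiB writes at queue depth 128 for 10 seconds, the test mutates the lvol underneath it. The snapshot/clone cycle from the trace, condensed to its bare RPCs (a sketch; $rpc, $lvol, $snap, $clone and $perf_pid stand in for the UUIDs and pid in the log):

    snap=$($rpc bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)   # read-only point-in-time copy
    $rpc bdev_lvol_resize "$lvol" 30                      # grow the live lvol to 30 MiB
    clone=$($rpc bdev_lvol_clone "$snap" MY_CLONE)        # writable clone of the snapshot
    $rpc bdev_lvol_inflate "$clone"                       # copy shared clusters so the clone stands alone
    wait "$perf_pid"                                      # let the perf run drain (the 'wait 881796' above)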
00:16:57.424 ======================================================== 00:16:57.424 Latency(us) 00:16:57.424 Device Information : IOPS MiB/s Average min max 00:16:57.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10752.50 42.00 11911.71 1494.88 79247.52 00:16:57.424 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 9865.90 38.54 12978.91 5494.64 85549.72 00:16:57.424 ======================================================== 00:16:57.424 Total : 20618.40 80.54 12422.37 1494.88 85549.72 00:16:57.424 00:16:57.424 21:23:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:16:57.424 21:23:31 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1eeb1e2d-4f3f-4636-8dcc-4958c903b7e1 00:16:57.683 21:23:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e176f004-6215-4c32-8533-98fcf3931d00 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:16:57.941 rmmod nvme_tcp 00:16:57.941 rmmod nvme_fabrics 00:16:57.941 rmmod nvme_keyring 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 881378 ']' 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 881378 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 881378 ']' 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 881378 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 881378 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 881378' 00:16:57.941 killing process with pid 881378 00:16:57.941 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 881378 00:16:57.942 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 881378 00:16:58.201 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:16:58.201 21:23:32 
nvmf_tcp.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:16:58.201 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:16:58.201 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:16:58.201 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:16:58.201 21:23:32 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:58.201 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:16:58.201 21:23:32 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.776 21:23:34 nvmf_tcp.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:00.776 00:17:00.776 real 0m18.692s 00:17:00.776 user 1m2.898s 00:17:00.776 sys 0m6.142s 00:17:00.776 21:23:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:00.776 21:23:34 nvmf_tcp.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:17:00.776 ************************************ 00:17:00.776 END TEST nvmf_lvol 00:17:00.776 ************************************ 00:17:00.776 21:23:34 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:00.776 21:23:34 nvmf_tcp -- nvmf/nvmf.sh@49 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:00.776 21:23:34 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:00.776 21:23:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:00.776 21:23:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:00.776 ************************************ 00:17:00.776 START TEST nvmf_lvs_grow 00:17:00.776 ************************************ 00:17:00.776 21:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:17:00.776 * Looking for test storage... 
00:17:00.776 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:00.776 21:23:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:00.776 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:17:00.776 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:17:00.777 21:23:35 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:02.681 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- 
nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:02.682 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:02.682 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:02.682 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 
0 )) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:02.682 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:02.682 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:02.682 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:17:02.682 00:17:02.682 --- 10.0.0.2 ping statistics --- 00:17:02.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.682 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:02.682 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:02.682 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.250 ms 00:17:02.682 00:17:02.682 --- 10.0.0.1 ping statistics --- 00:17:02.682 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:02.682 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=885055 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 885055 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 885055 ']' 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:02.682 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:02.682 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:02.682 [2024-07-11 21:23:37.233294] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:02.682 [2024-07-11 21:23:37.233397] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:02.682 EAL: No free 2048 kB hugepages reported on node 1 00:17:02.682 [2024-07-11 21:23:37.298890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:02.682 [2024-07-11 21:23:37.385731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:02.682 [2024-07-11 21:23:37.385794] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:02.682 [2024-07-11 21:23:37.385824] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:02.682 [2024-07-11 21:23:37.385836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:02.682 [2024-07-11 21:23:37.385846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:02.682 [2024-07-11 21:23:37.385877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:02.940 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:02.940 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:17:02.940 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:02.940 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:02.940 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:02.940 21:23:37 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:02.940 21:23:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:03.198 [2024-07-11 21:23:37.795625] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:03.198 ************************************ 00:17:03.198 START TEST lvs_grow_clean 00:17:03.198 ************************************ 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:03.198 21:23:37 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:03.455 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # 
aio_bdev=aio_bdev 00:17:03.455 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:03.713 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=651665ca-57bd-4d92-a02f-eef07c04a665 00:17:03.713 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:03.713 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:03.972 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:03.972 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:03.972 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 651665ca-57bd-4d92-a02f-eef07c04a665 lvol 150 00:17:04.231 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=5eaebe9d-232e-4706-91ee-2c80d12ca1a5 00:17:04.231 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:04.231 21:23:38 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:04.491 [2024-07-11 21:23:39.151939] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:04.491 [2024-07-11 21:23:39.152030] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:04.491 true 00:17:04.491 21:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:04.491 21:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:04.750 21:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:04.751 21:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:05.009 21:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 5eaebe9d-232e-4706-91ee-2c80d12ca1a5 00:17:05.267 21:23:39 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:05.525 [2024-07-11 21:23:40.150975] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:05.525 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=885490 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 885490 /var/tmp/bdevperf.sock 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 885490 ']' 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:05.812 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:05.812 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:05.812 [2024-07-11 21:23:40.452330] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:05.812 [2024-07-11 21:23:40.452414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid885490 ] 00:17:05.812 EAL: No free 2048 kB hugepages reported on node 1 00:17:05.812 [2024-07-11 21:23:40.514849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:06.070 [2024-07-11 21:23:40.606375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:06.070 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:06.070 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:17:06.070 21:23:40 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:06.328 Nvme0n1 00:17:06.328 21:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:06.586 [ 00:17:06.586 { 00:17:06.586 "name": "Nvme0n1", 00:17:06.586 "aliases": [ 00:17:06.586 "5eaebe9d-232e-4706-91ee-2c80d12ca1a5" 00:17:06.586 ], 00:17:06.586 "product_name": "NVMe disk", 00:17:06.586 "block_size": 4096, 00:17:06.586 "num_blocks": 38912, 00:17:06.586 "uuid": "5eaebe9d-232e-4706-91ee-2c80d12ca1a5", 00:17:06.586 "assigned_rate_limits": { 00:17:06.586 "rw_ios_per_sec": 0, 00:17:06.586 "rw_mbytes_per_sec": 0, 00:17:06.586 "r_mbytes_per_sec": 0, 00:17:06.586 "w_mbytes_per_sec": 0 00:17:06.586 }, 00:17:06.586 "claimed": false, 00:17:06.586 "zoned": false, 00:17:06.586 "supported_io_types": { 00:17:06.586 "read": true, 00:17:06.586 "write": true, 00:17:06.586 "unmap": true, 00:17:06.586 "flush": true, 00:17:06.586 "reset": true, 00:17:06.586 "nvme_admin": true, 00:17:06.586 "nvme_io": true, 00:17:06.586 "nvme_io_md": false, 00:17:06.586 "write_zeroes": true, 00:17:06.586 "zcopy": false, 00:17:06.586 "get_zone_info": false, 00:17:06.586 "zone_management": false, 00:17:06.586 "zone_append": false, 00:17:06.586 "compare": true, 00:17:06.586 "compare_and_write": true, 00:17:06.586 "abort": true, 00:17:06.586 "seek_hole": false, 00:17:06.586 "seek_data": false, 00:17:06.586 "copy": true, 00:17:06.586 "nvme_iov_md": false 00:17:06.586 }, 00:17:06.586 "memory_domains": [ 00:17:06.586 { 00:17:06.586 "dma_device_id": "system", 00:17:06.586 "dma_device_type": 1 00:17:06.586 } 00:17:06.586 ], 00:17:06.586 "driver_specific": { 00:17:06.586 "nvme": [ 00:17:06.586 { 00:17:06.586 "trid": { 00:17:06.586 "trtype": "TCP", 00:17:06.586 "adrfam": "IPv4", 00:17:06.586 "traddr": "10.0.0.2", 00:17:06.586 "trsvcid": "4420", 00:17:06.586 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:06.586 }, 00:17:06.586 "ctrlr_data": { 00:17:06.586 "cntlid": 1, 00:17:06.586 "vendor_id": "0x8086", 00:17:06.586 "model_number": "SPDK bdev Controller", 00:17:06.586 "serial_number": "SPDK0", 00:17:06.586 "firmware_revision": "24.09", 00:17:06.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:06.586 "oacs": { 00:17:06.586 "security": 0, 00:17:06.586 "format": 0, 00:17:06.586 "firmware": 0, 00:17:06.586 "ns_manage": 0 00:17:06.586 }, 00:17:06.586 "multi_ctrlr": true, 00:17:06.586 "ana_reporting": false 00:17:06.586 }, 
00:17:06.586 "vs": { 00:17:06.586 "nvme_version": "1.3" 00:17:06.586 }, 00:17:06.586 "ns_data": { 00:17:06.586 "id": 1, 00:17:06.586 "can_share": true 00:17:06.586 } 00:17:06.586 } 00:17:06.586 ], 00:17:06.586 "mp_policy": "active_passive" 00:17:06.586 } 00:17:06.586 } 00:17:06.586 ] 00:17:06.586 21:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=885541 00:17:06.586 21:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:06.586 21:23:41 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:06.843 Running I/O for 10 seconds... 00:17:07.782 Latency(us) 00:17:07.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:07.782 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:07.782 Nvme0n1 : 1.00 14606.00 57.05 0.00 0.00 0.00 0.00 0.00 00:17:07.782 =================================================================================================================== 00:17:07.782 Total : 14606.00 57.05 0.00 0.00 0.00 0.00 0.00 00:17:07.782 00:17:08.720 21:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:08.720 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:08.720 Nvme0n1 : 2.00 14669.00 57.30 0.00 0.00 0.00 0.00 0.00 00:17:08.720 =================================================================================================================== 00:17:08.720 Total : 14669.00 57.30 0.00 0.00 0.00 0.00 0.00 00:17:08.720 00:17:08.978 true 00:17:08.978 21:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:08.978 21:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:09.238 21:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:09.238 21:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:09.238 21:23:43 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 885541 00:17:09.806 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:09.806 Nvme0n1 : 3.00 14817.00 57.88 0.00 0.00 0.00 0.00 0.00 00:17:09.806 =================================================================================================================== 00:17:09.806 Total : 14817.00 57.88 0.00 0.00 0.00 0.00 0.00 00:17:09.806 00:17:10.745 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:10.745 Nvme0n1 : 4.00 14844.50 57.99 0.00 0.00 0.00 0.00 0.00 00:17:10.745 =================================================================================================================== 00:17:10.745 Total : 14844.50 57.99 0.00 0.00 0.00 0.00 0.00 00:17:10.745 00:17:11.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:11.679 Nvme0n1 : 5.00 14936.40 58.35 0.00 0.00 0.00 0.00 0.00 00:17:11.679 =================================================================================================================== 00:17:11.680 
Total : 14936.40 58.35 0.00 0.00 0.00 0.00 0.00 00:17:11.680 00:17:13.056 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.056 Nvme0n1 : 6.00 15018.67 58.67 0.00 0.00 0.00 0.00 0.00 00:17:13.056 =================================================================================================================== 00:17:13.056 Total : 15018.67 58.67 0.00 0.00 0.00 0.00 0.00 00:17:13.056 00:17:13.995 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:13.995 Nvme0n1 : 7.00 15032.14 58.72 0.00 0.00 0.00 0.00 0.00 00:17:13.995 =================================================================================================================== 00:17:13.995 Total : 15032.14 58.72 0.00 0.00 0.00 0.00 0.00 00:17:13.995 00:17:14.932 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:14.932 Nvme0n1 : 8.00 15074.00 58.88 0.00 0.00 0.00 0.00 0.00 00:17:14.932 =================================================================================================================== 00:17:14.932 Total : 15074.00 58.88 0.00 0.00 0.00 0.00 0.00 00:17:14.932 00:17:15.870 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:15.870 Nvme0n1 : 9.00 15106.56 59.01 0.00 0.00 0.00 0.00 0.00 00:17:15.870 =================================================================================================================== 00:17:15.870 Total : 15106.56 59.01 0.00 0.00 0.00 0.00 0.00 00:17:15.870 00:17:16.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.843 Nvme0n1 : 10.00 15115.40 59.04 0.00 0.00 0.00 0.00 0.00 00:17:16.843 =================================================================================================================== 00:17:16.843 Total : 15115.40 59.04 0.00 0.00 0.00 0.00 0.00 00:17:16.843 00:17:16.843 00:17:16.843 Latency(us) 00:17:16.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.843 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:16.843 Nvme0n1 : 10.00 15119.76 59.06 0.00 0.00 8460.65 2220.94 16602.45 00:17:16.843 =================================================================================================================== 00:17:16.843 Total : 15119.76 59.06 0.00 0.00 8460.65 2220.94 16602.45 00:17:16.843 0 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 885490 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 885490 ']' 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 885490 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 885490 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 885490' 00:17:16.843 killing process with pid 885490 00:17:16.843 21:23:51 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 885490 00:17:16.843 Received shutdown signal, test time was about 10.000000 seconds 00:17:16.843 00:17:16.843 Latency(us) 00:17:16.843 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:16.843 =================================================================================================================== 00:17:16.843 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:16.843 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 885490 00:17:17.101 21:23:51 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:17.360 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:17.618 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:17.618 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:17.876 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:17.876 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:17:17.876 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:18.134 [2024-07-11 21:23:52.804531] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:18.134 21:23:52 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:18.392 request: 00:17:18.392 { 00:17:18.392 "uuid": "651665ca-57bd-4d92-a02f-eef07c04a665", 00:17:18.392 "method": "bdev_lvol_get_lvstores", 00:17:18.392 "req_id": 1 00:17:18.392 } 00:17:18.392 Got JSON-RPC error response 00:17:18.392 response: 00:17:18.392 { 00:17:18.392 "code": -19, 00:17:18.392 "message": "No such device" 00:17:18.392 } 00:17:18.392 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:17:18.392 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:18.392 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:18.392 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:18.392 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:18.649 aio_bdev 00:17:18.649 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 5eaebe9d-232e-4706-91ee-2c80d12ca1a5 00:17:18.649 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=5eaebe9d-232e-4706-91ee-2c80d12ca1a5 00:17:18.649 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:18.649 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:17:18.649 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:18.649 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:18.649 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:18.906 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 5eaebe9d-232e-4706-91ee-2c80d12ca1a5 -t 2000 00:17:19.163 [ 00:17:19.163 { 00:17:19.163 "name": "5eaebe9d-232e-4706-91ee-2c80d12ca1a5", 00:17:19.163 "aliases": [ 00:17:19.163 "lvs/lvol" 00:17:19.163 ], 00:17:19.163 "product_name": "Logical Volume", 00:17:19.163 "block_size": 4096, 00:17:19.163 "num_blocks": 38912, 00:17:19.163 "uuid": "5eaebe9d-232e-4706-91ee-2c80d12ca1a5", 00:17:19.163 "assigned_rate_limits": { 00:17:19.163 "rw_ios_per_sec": 0, 00:17:19.163 "rw_mbytes_per_sec": 0, 00:17:19.163 "r_mbytes_per_sec": 0, 00:17:19.163 "w_mbytes_per_sec": 0 00:17:19.163 }, 00:17:19.163 "claimed": false, 00:17:19.163 "zoned": false, 00:17:19.163 "supported_io_types": { 00:17:19.163 "read": true, 00:17:19.163 "write": true, 00:17:19.163 "unmap": true, 00:17:19.163 "flush": false, 00:17:19.163 "reset": true, 00:17:19.163 "nvme_admin": false, 00:17:19.163 "nvme_io": false, 00:17:19.163 
"nvme_io_md": false, 00:17:19.163 "write_zeroes": true, 00:17:19.163 "zcopy": false, 00:17:19.163 "get_zone_info": false, 00:17:19.163 "zone_management": false, 00:17:19.163 "zone_append": false, 00:17:19.163 "compare": false, 00:17:19.163 "compare_and_write": false, 00:17:19.163 "abort": false, 00:17:19.163 "seek_hole": true, 00:17:19.163 "seek_data": true, 00:17:19.163 "copy": false, 00:17:19.163 "nvme_iov_md": false 00:17:19.163 }, 00:17:19.163 "driver_specific": { 00:17:19.163 "lvol": { 00:17:19.163 "lvol_store_uuid": "651665ca-57bd-4d92-a02f-eef07c04a665", 00:17:19.163 "base_bdev": "aio_bdev", 00:17:19.163 "thin_provision": false, 00:17:19.163 "num_allocated_clusters": 38, 00:17:19.163 "snapshot": false, 00:17:19.163 "clone": false, 00:17:19.163 "esnap_clone": false 00:17:19.163 } 00:17:19.163 } 00:17:19.163 } 00:17:19.163 ] 00:17:19.163 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:17:19.163 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:19.163 21:23:53 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:19.421 21:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:19.421 21:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:19.421 21:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:19.678 21:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:19.678 21:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 5eaebe9d-232e-4706-91ee-2c80d12ca1a5 00:17:19.936 21:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 651665ca-57bd-4d92-a02f-eef07c04a665 00:17:20.193 21:23:54 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:20.452 00:17:20.452 real 0m17.242s 00:17:20.452 user 0m16.801s 00:17:20.452 sys 0m1.873s 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:17:20.452 ************************************ 00:17:20.452 END TEST lvs_grow_clean 00:17:20.452 ************************************ 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 
00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:20.452 ************************************ 00:17:20.452 START TEST lvs_grow_dirty 00:17:20.452 ************************************ 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:20.452 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:20.712 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:17:20.712 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:17:20.972 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:20.972 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:20.972 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:17:21.230 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:17:21.230 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:17:21.230 21:23:55 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 lvol 150 00:17:21.488 21:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=edc1bcce-d3b6-4668-9633-e219a34bf6cb 00:17:21.488 21:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:21.488 21:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:17:21.747 
[2024-07-11 21:23:56.384938] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:17:21.747 [2024-07-11 21:23:56.385014] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:17:21.747 true 00:17:21.747 21:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:21.747 21:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:17:22.007 21:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:17:22.007 21:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:17:22.265 21:23:56 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 edc1bcce-d3b6-4668-9633-e219a34bf6cb 00:17:22.523 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:17:22.782 [2024-07-11 21:23:57.375956] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:22.782 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=887536 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 887536 /var/tmp/bdevperf.sock 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 887536 ']' 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:23.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
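Condensing the lvs_grow_dirty setup just logged: a 200M file backs an AIO bdev with 4 KiB blocks, an lvstore with 4 MiB clusters (49 data clusters) is created on it, a 150M lvol is carved out, the file is doubled to 400M and the AIO bdev rescanned (block count 51200 -> 102400), and the lvol is exported over NVMe/TCP. A sketch of the same sequence; capturing the UUIDs from rpc.py's plain-text output is an assumption, everything else is taken from the run above:

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  aio=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev
  truncate -s 200M "$aio"
  $rpc bdev_aio_create "$aio" aio_bdev 4096           # 4 KiB logical blocks
  lvs=$($rpc bdev_lvol_create_lvstore --cluster-sz 4194304 \
        --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  lvol=$($rpc bdev_lvol_create -u "$lvs" lvol 150)    # 150 MB volume
  truncate -s 400M "$aio"                             # grow the file under the bdev...
  $rpc bdev_aio_rescan aio_bdev                       # ...and let the bdev pick it up
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

Note the rescan alone does not grow the lvstore: total_data_clusters stays at 49 until bdev_lvol_grow_lvstore is called below.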
00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:23.040 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:23.040 [2024-07-11 21:23:57.727494] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:23.040 [2024-07-11 21:23:57.727574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid887536 ] 00:17:23.040 EAL: No free 2048 kB hugepages reported on node 1 00:17:23.040 [2024-07-11 21:23:57.790440] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:23.299 [2024-07-11 21:23:57.883345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:23.299 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:23.299 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:23.299 21:23:57 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:17:23.557 Nvme0n1 00:17:23.817 21:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:17:23.817 [ 00:17:23.817 { 00:17:23.817 "name": "Nvme0n1", 00:17:23.817 "aliases": [ 00:17:23.817 "edc1bcce-d3b6-4668-9633-e219a34bf6cb" 00:17:23.817 ], 00:17:23.817 "product_name": "NVMe disk", 00:17:23.817 "block_size": 4096, 00:17:23.817 "num_blocks": 38912, 00:17:23.817 "uuid": "edc1bcce-d3b6-4668-9633-e219a34bf6cb", 00:17:23.817 "assigned_rate_limits": { 00:17:23.817 "rw_ios_per_sec": 0, 00:17:23.817 "rw_mbytes_per_sec": 0, 00:17:23.817 "r_mbytes_per_sec": 0, 00:17:23.817 "w_mbytes_per_sec": 0 00:17:23.817 }, 00:17:23.817 "claimed": false, 00:17:23.817 "zoned": false, 00:17:23.817 "supported_io_types": { 00:17:23.817 "read": true, 00:17:23.817 "write": true, 00:17:23.817 "unmap": true, 00:17:23.817 "flush": true, 00:17:23.817 "reset": true, 00:17:23.817 "nvme_admin": true, 00:17:23.817 "nvme_io": true, 00:17:23.817 "nvme_io_md": false, 00:17:23.817 "write_zeroes": true, 00:17:23.817 "zcopy": false, 00:17:23.817 "get_zone_info": false, 00:17:23.817 "zone_management": false, 00:17:23.817 "zone_append": false, 00:17:23.817 "compare": true, 00:17:23.817 "compare_and_write": true, 00:17:23.817 "abort": true, 00:17:23.817 "seek_hole": false, 00:17:23.817 "seek_data": false, 00:17:23.817 "copy": true, 00:17:23.817 "nvme_iov_md": false 00:17:23.817 }, 00:17:23.817 "memory_domains": [ 00:17:23.817 { 00:17:23.817 "dma_device_id": "system", 00:17:23.817 "dma_device_type": 1 00:17:23.817 } 00:17:23.817 ], 00:17:23.817 "driver_specific": { 00:17:23.817 "nvme": [ 00:17:23.817 { 00:17:23.817 "trid": { 00:17:23.817 "trtype": "TCP", 00:17:23.817 "adrfam": "IPv4", 00:17:23.817 "traddr": "10.0.0.2", 00:17:23.817 "trsvcid": "4420", 00:17:23.817 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:17:23.817 }, 00:17:23.817 "ctrlr_data": { 00:17:23.817 "cntlid": 1, 00:17:23.817 "vendor_id": "0x8086", 00:17:23.817 "model_number": "SPDK bdev Controller", 00:17:23.817 "serial_number": "SPDK0", 
00:17:23.817 "firmware_revision": "24.09", 00:17:23.817 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:17:23.817 "oacs": { 00:17:23.817 "security": 0, 00:17:23.817 "format": 0, 00:17:23.817 "firmware": 0, 00:17:23.817 "ns_manage": 0 00:17:23.817 }, 00:17:23.817 "multi_ctrlr": true, 00:17:23.817 "ana_reporting": false 00:17:23.817 }, 00:17:23.817 "vs": { 00:17:23.817 "nvme_version": "1.3" 00:17:23.817 }, 00:17:23.817 "ns_data": { 00:17:23.817 "id": 1, 00:17:23.817 "can_share": true 00:17:23.817 } 00:17:23.817 } 00:17:23.817 ], 00:17:23.817 "mp_policy": "active_passive" 00:17:23.817 } 00:17:23.817 } 00:17:23.817 ] 00:17:24.076 21:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=887670 00:17:24.076 21:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:17:24.076 21:23:58 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:24.076 Running I/O for 10 seconds... 00:17:25.011 Latency(us) 00:17:25.011 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:25.011 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.011 Nvme0n1 : 1.00 14035.00 54.82 0.00 0.00 0.00 0.00 0.00 00:17:25.011 =================================================================================================================== 00:17:25.011 Total : 14035.00 54.82 0.00 0.00 0.00 0.00 0.00 00:17:25.011 00:17:25.946 21:24:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:25.946 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:25.946 Nvme0n1 : 2.00 14264.50 55.72 0.00 0.00 0.00 0.00 0.00 00:17:25.946 =================================================================================================================== 00:17:25.946 Total : 14264.50 55.72 0.00 0.00 0.00 0.00 0.00 00:17:25.946 00:17:26.203 true 00:17:26.203 21:24:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:26.203 21:24:00 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:17:26.461 21:24:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:17:26.461 21:24:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:17:26.461 21:24:01 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 887670 00:17:27.026 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.026 Nvme0n1 : 3.00 14299.33 55.86 0.00 0.00 0.00 0.00 0.00 00:17:27.026 =================================================================================================================== 00:17:27.026 Total : 14299.33 55.86 0.00 0.00 0.00 0.00 0.00 00:17:27.026 00:17:27.962 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:27.962 Nvme0n1 : 4.00 14376.50 56.16 0.00 0.00 0.00 0.00 0.00 00:17:27.962 =================================================================================================================== 00:17:27.962 Total : 14376.50 56.16 0.00 0.00 
0.00 0.00 0.00 00:17:27.962 00:17:29.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:29.339 Nvme0n1 : 5.00 14486.00 56.59 0.00 0.00 0.00 0.00 0.00 00:17:29.339 =================================================================================================================== 00:17:29.339 Total : 14486.00 56.59 0.00 0.00 0.00 0.00 0.00 00:17:29.339 00:17:30.274 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:30.274 Nvme0n1 : 6.00 14537.67 56.79 0.00 0.00 0.00 0.00 0.00 00:17:30.274 =================================================================================================================== 00:17:30.274 Total : 14537.67 56.79 0.00 0.00 0.00 0.00 0.00 00:17:30.274 00:17:31.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:31.211 Nvme0n1 : 7.00 14610.71 57.07 0.00 0.00 0.00 0.00 0.00 00:17:31.211 =================================================================================================================== 00:17:31.211 Total : 14610.71 57.07 0.00 0.00 0.00 0.00 0.00 00:17:31.211 00:17:32.182 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:32.182 Nvme0n1 : 8.00 14692.38 57.39 0.00 0.00 0.00 0.00 0.00 00:17:32.182 =================================================================================================================== 00:17:32.182 Total : 14692.38 57.39 0.00 0.00 0.00 0.00 0.00 00:17:32.182 00:17:33.122 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:33.122 Nvme0n1 : 9.00 14782.11 57.74 0.00 0.00 0.00 0.00 0.00 00:17:33.122 =================================================================================================================== 00:17:33.122 Total : 14782.11 57.74 0.00 0.00 0.00 0.00 0.00 00:17:33.122 00:17:34.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.059 Nvme0n1 : 10.00 14792.20 57.78 0.00 0.00 0.00 0.00 0.00 00:17:34.059 =================================================================================================================== 00:17:34.059 Total : 14792.20 57.78 0.00 0.00 0.00 0.00 0.00 00:17:34.059 00:17:34.059 00:17:34.059 Latency(us) 00:17:34.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.059 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:17:34.059 Nvme0n1 : 10.01 14795.56 57.80 0.00 0.00 8646.53 5072.97 17087.91 00:17:34.059 =================================================================================================================== 00:17:34.059 Total : 14795.56 57.80 0.00 0.00 8646.53 5072.97 17087.91 00:17:34.059 0 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 887536 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 887536 ']' 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 887536 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 887536 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:17:34.059 21:24:08 
nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 887536' 00:17:34.059 killing process with pid 887536 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 887536 00:17:34.059 Received shutdown signal, test time was about 10.000000 seconds 00:17:34.059 00:17:34.059 Latency(us) 00:17:34.059 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:34.059 =================================================================================================================== 00:17:34.059 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:34.059 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 887536 00:17:34.318 21:24:08 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:17:34.576 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:17:34.835 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:34.835 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 885055 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 885055 00:17:35.094 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 885055 Killed "${NVMF_APP[@]}" "$@" 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=889611 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 889611 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 889611 ']' 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:35.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:35.094 21:24:09 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:35.094 [2024-07-11 21:24:09.856801] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:35.094 [2024-07-11 21:24:09.856879] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:35.354 EAL: No free 2048 kB hugepages reported on node 1 00:17:35.354 [2024-07-11 21:24:09.921220] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:35.354 [2024-07-11 21:24:10.008208] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:35.354 [2024-07-11 21:24:10.008265] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:35.354 [2024-07-11 21:24:10.008280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:35.354 [2024-07-11 21:24:10.008291] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:35.354 [2024-07-11 21:24:10.008302] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
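What the run above exercises: bdev_lvol_grow_lvstore is issued roughly two seconds into a ten-second bdevperf randwrite workload over NVMe/TCP (total_data_clusters goes 49 -> 99 mid-run), after which the target is killed with SIGKILL so the grown metadata is left dirty for the recovery phase that follows. The core of that, condensed from the commands above ($rpc and $lvs as in the earlier sketch; the pid handling here is a simplification):

  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
      -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &
  # Attach the exported namespace as bdev Nvme0n1 inside bdevperf.
  $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
      -s /var/tmp/bdevperf.sock perform_tests &
  sleep 2
  $rpc bdev_lvol_grow_lvstore -u "$lvs"   # 49 -> 99 clusters while writes are in flight
  wait %2                                 # let the 10 s run finish
  kill -9 "$nvmfpid"                      # nvmf_tgt pid (885055 in this run); leaves the lvstore dirty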
00:17:35.354 [2024-07-11 21:24:10.008328] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:35.354 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:35.354 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:17:35.354 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:35.354 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:35.354 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:35.613 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:35.613 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:35.871 [2024-07-11 21:24:10.426482] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:17:35.871 [2024-07-11 21:24:10.426690] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:17:35.871 [2024-07-11 21:24:10.426763] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:17:35.871 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:17:35.871 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev edc1bcce-d3b6-4668-9633-e219a34bf6cb 00:17:35.871 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=edc1bcce-d3b6-4668-9633-e219a34bf6cb 00:17:35.871 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:35.871 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:35.871 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:35.871 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:35.871 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:36.129 21:24:10 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b edc1bcce-d3b6-4668-9633-e219a34bf6cb -t 2000 00:17:36.387 [ 00:17:36.387 { 00:17:36.387 "name": "edc1bcce-d3b6-4668-9633-e219a34bf6cb", 00:17:36.387 "aliases": [ 00:17:36.387 "lvs/lvol" 00:17:36.387 ], 00:17:36.387 "product_name": "Logical Volume", 00:17:36.387 "block_size": 4096, 00:17:36.387 "num_blocks": 38912, 00:17:36.387 "uuid": "edc1bcce-d3b6-4668-9633-e219a34bf6cb", 00:17:36.387 "assigned_rate_limits": { 00:17:36.387 "rw_ios_per_sec": 0, 00:17:36.387 "rw_mbytes_per_sec": 0, 00:17:36.387 "r_mbytes_per_sec": 0, 00:17:36.387 "w_mbytes_per_sec": 0 00:17:36.387 }, 00:17:36.387 "claimed": false, 00:17:36.387 "zoned": false, 00:17:36.387 "supported_io_types": { 00:17:36.387 "read": true, 00:17:36.387 "write": true, 00:17:36.387 "unmap": true, 00:17:36.387 "flush": false, 00:17:36.387 "reset": true, 00:17:36.387 "nvme_admin": false, 00:17:36.387 "nvme_io": false, 00:17:36.387 "nvme_io_md": 
false, 00:17:36.387 "write_zeroes": true, 00:17:36.387 "zcopy": false, 00:17:36.387 "get_zone_info": false, 00:17:36.387 "zone_management": false, 00:17:36.387 "zone_append": false, 00:17:36.387 "compare": false, 00:17:36.387 "compare_and_write": false, 00:17:36.387 "abort": false, 00:17:36.387 "seek_hole": true, 00:17:36.387 "seek_data": true, 00:17:36.387 "copy": false, 00:17:36.387 "nvme_iov_md": false 00:17:36.387 }, 00:17:36.387 "driver_specific": { 00:17:36.387 "lvol": { 00:17:36.387 "lvol_store_uuid": "5bcff242-a598-4bf1-9a2b-09b6376f71b9", 00:17:36.388 "base_bdev": "aio_bdev", 00:17:36.388 "thin_provision": false, 00:17:36.388 "num_allocated_clusters": 38, 00:17:36.388 "snapshot": false, 00:17:36.388 "clone": false, 00:17:36.388 "esnap_clone": false 00:17:36.388 } 00:17:36.388 } 00:17:36.388 } 00:17:36.388 ] 00:17:36.388 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:36.388 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:36.388 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:17:36.647 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:17:36.647 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:36.647 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:17:36.906 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:17:36.906 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:37.163 [2024-07-11 21:24:11.816030] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:37.163 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:17:37.164 21:24:11 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:37.421 request: 00:17:37.421 { 00:17:37.421 "uuid": "5bcff242-a598-4bf1-9a2b-09b6376f71b9", 00:17:37.421 "method": "bdev_lvol_get_lvstores", 00:17:37.421 "req_id": 1 00:17:37.421 } 00:17:37.421 Got JSON-RPC error response 00:17:37.421 response: 00:17:37.421 { 00:17:37.421 "code": -19, 00:17:37.421 "message": "No such device" 00:17:37.421 } 00:17:37.421 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:17:37.421 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:37.421 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:37.421 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:37.421 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:17:37.679 aio_bdev 00:17:37.679 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev edc1bcce-d3b6-4668-9633-e219a34bf6cb 00:17:37.679 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=edc1bcce-d3b6-4668-9633-e219a34bf6cb 00:17:37.679 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:37.679 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:17:37.679 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:37.679 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:37.679 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:37.939 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b edc1bcce-d3b6-4668-9633-e219a34bf6cb -t 2000 00:17:38.198 [ 00:17:38.198 { 00:17:38.198 "name": "edc1bcce-d3b6-4668-9633-e219a34bf6cb", 00:17:38.198 "aliases": [ 00:17:38.198 "lvs/lvol" 00:17:38.198 ], 00:17:38.198 "product_name": "Logical Volume", 00:17:38.198 "block_size": 4096, 00:17:38.198 "num_blocks": 38912, 00:17:38.198 "uuid": "edc1bcce-d3b6-4668-9633-e219a34bf6cb", 00:17:38.198 "assigned_rate_limits": { 00:17:38.198 "rw_ios_per_sec": 0, 00:17:38.198 "rw_mbytes_per_sec": 0, 00:17:38.198 "r_mbytes_per_sec": 0, 00:17:38.198 "w_mbytes_per_sec": 0 00:17:38.198 }, 00:17:38.198 "claimed": false, 00:17:38.198 "zoned": false, 00:17:38.198 "supported_io_types": { 
00:17:38.198 "read": true, 00:17:38.198 "write": true, 00:17:38.198 "unmap": true, 00:17:38.198 "flush": false, 00:17:38.198 "reset": true, 00:17:38.198 "nvme_admin": false, 00:17:38.198 "nvme_io": false, 00:17:38.198 "nvme_io_md": false, 00:17:38.198 "write_zeroes": true, 00:17:38.198 "zcopy": false, 00:17:38.198 "get_zone_info": false, 00:17:38.198 "zone_management": false, 00:17:38.198 "zone_append": false, 00:17:38.198 "compare": false, 00:17:38.198 "compare_and_write": false, 00:17:38.198 "abort": false, 00:17:38.198 "seek_hole": true, 00:17:38.198 "seek_data": true, 00:17:38.198 "copy": false, 00:17:38.198 "nvme_iov_md": false 00:17:38.198 }, 00:17:38.198 "driver_specific": { 00:17:38.198 "lvol": { 00:17:38.198 "lvol_store_uuid": "5bcff242-a598-4bf1-9a2b-09b6376f71b9", 00:17:38.198 "base_bdev": "aio_bdev", 00:17:38.198 "thin_provision": false, 00:17:38.198 "num_allocated_clusters": 38, 00:17:38.198 "snapshot": false, 00:17:38.198 "clone": false, 00:17:38.198 "esnap_clone": false 00:17:38.198 } 00:17:38.198 } 00:17:38.198 } 00:17:38.198 ] 00:17:38.198 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:17:38.198 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:38.198 21:24:12 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:17:38.457 21:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:17:38.457 21:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:38.457 21:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:17:38.715 21:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:17:38.715 21:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete edc1bcce-d3b6-4668-9633-e219a34bf6cb 00:17:38.973 21:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5bcff242-a598-4bf1-9a2b-09b6376f71b9 00:17:39.231 21:24:13 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:17:39.488 00:17:39.488 real 0m19.070s 00:17:39.488 user 0m47.978s 00:17:39.488 sys 0m4.638s 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:17:39.488 ************************************ 00:17:39.488 END TEST lvs_grow_dirty 00:17:39.488 ************************************ 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 
00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:17:39.488 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:17:39.488 nvmf_trace.0 00:17:39.746 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:39.747 rmmod nvme_tcp 00:17:39.747 rmmod nvme_fabrics 00:17:39.747 rmmod nvme_keyring 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 889611 ']' 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 889611 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 889611 ']' 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 889611 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 889611 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 889611' 00:17:39.747 killing process with pid 889611 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 889611 00:17:39.747 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 889611 00:17:40.006 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:40.006 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:40.006 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:40.006 21:24:14 
nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:40.006 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:40.006 21:24:14 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:40.006 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:40.006 21:24:14 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.907 21:24:16 nvmf_tcp.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:41.907 00:17:41.907 real 0m41.627s 00:17:41.907 user 1m10.632s 00:17:41.907 sys 0m8.381s 00:17:41.907 21:24:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:41.907 21:24:16 nvmf_tcp.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:17:41.907 ************************************ 00:17:41.907 END TEST nvmf_lvs_grow 00:17:41.907 ************************************ 00:17:41.907 21:24:16 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:41.907 21:24:16 nvmf_tcp -- nvmf/nvmf.sh@50 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:41.907 21:24:16 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:41.907 21:24:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:41.907 21:24:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:41.907 ************************************ 00:17:41.907 START TEST nvmf_bdev_io_wait 00:17:41.907 ************************************ 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:17:42.165 * Looking for test storage... 
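Each suite in this job is dispatched the same way; run_test (a helper from autotest_common.sh) wraps the script with the START/END banners and the real/user/sys timing shown above. For the suite that starts here:

  run_test nvmf_bdev_io_wait \
      /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh \
      --transport=tcp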
00:17:42.165 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:17:42.165 21:24:16 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # 
[[ tcp == rdma ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:44.069 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:44.069 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:44.069 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:44.069 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:44.069 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:44.070 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:44.328 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:44.329 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:44.329 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms 00:17:44.329 00:17:44.329 --- 10.0.0.2 ping statistics --- 00:17:44.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.329 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:44.329 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:44.329 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.178 ms 00:17:44.329 00:17:44.329 --- 10.0.0.1 ping statistics --- 00:17:44.329 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:44.329 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=892133 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 892133 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 892133 ']' 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:44.329 21:24:18 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.329 [2024-07-11 21:24:18.967457] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:44.329 [2024-07-11 21:24:18.967546] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.329 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.329 [2024-07-11 21:24:19.031569] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:44.588 [2024-07-11 21:24:19.122146] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.588 [2024-07-11 21:24:19.122202] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.588 [2024-07-11 21:24:19.122230] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.588 [2024-07-11 21:24:19.122241] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.588 [2024-07-11 21:24:19.122251] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:44.588 [2024-07-11 21:24:19.122541] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.588 [2024-07-11 21:24:19.122610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.588 [2024-07-11 21:24:19.122670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:44.588 [2024-07-11 21:24:19.122672] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.588 [2024-07-11 21:24:19.279457] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
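For reference, everything nvmf_tcp_init and nvmfappstart have done up to this point condenses to a short recipe. The following is a minimal sketch only, reconstructed from the trace above (run as root from the spdk checkout; rpc.py stands for scripts/rpc.py, which is what the rpc_cmd wrapper invokes against /var/tmp/spdk.sock):

    # Split the two e810 ports across namespaces so TCP traffic crosses a real link:
    # cvl_0_0 (target, 10.0.0.2) moves into cvl_0_0_ns_spdk; cvl_0_1 (initiator,
    # 10.0.0.1) stays in the root namespace.
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

    # Start the target inside the namespace, then configure it over RPC.
    # bdev_set_options shrinks the bdev I/O pools (-p pool size, -c per-thread
    # cache size), which is what forces the IO_WAIT retry path this test exercises.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -m 0xF --wait-for-rpc &
    rpc.py bdev_set_options -p 5 -c 1
    rpc.py framework_start_init
    rpc.py nvmf_create_transport -t tcp -o -u 8192

The malloc bdev, subsystem, namespace and listener added next in the trace complete the target side.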
00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.588 Malloc0 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:44.588 [2024-07-11 21:24:19.340505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=892168 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=892169 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=892172 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:44.588 { 00:17:44.588 "params": { 00:17:44.588 "name": "Nvme$subsystem", 00:17:44.588 "trtype": "$TEST_TRANSPORT", 00:17:44.588 "traddr": "$NVMF_FIRST_TARGET_IP", 
00:17:44.588 "adrfam": "ipv4", 00:17:44.588 "trsvcid": "$NVMF_PORT", 00:17:44.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.588 "hdgst": ${hdgst:-false}, 00:17:44.588 "ddgst": ${ddgst:-false} 00:17:44.588 }, 00:17:44.588 "method": "bdev_nvme_attach_controller" 00:17:44.588 } 00:17:44.588 EOF 00:17:44.588 )") 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:44.588 { 00:17:44.588 "params": { 00:17:44.588 "name": "Nvme$subsystem", 00:17:44.588 "trtype": "$TEST_TRANSPORT", 00:17:44.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.588 "adrfam": "ipv4", 00:17:44.588 "trsvcid": "$NVMF_PORT", 00:17:44.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.588 "hdgst": ${hdgst:-false}, 00:17:44.588 "ddgst": ${ddgst:-false} 00:17:44.588 }, 00:17:44.588 "method": "bdev_nvme_attach_controller" 00:17:44.588 } 00:17:44.588 EOF 00:17:44.588 )") 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=892174 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:44.588 { 00:17:44.588 "params": { 00:17:44.588 "name": "Nvme$subsystem", 00:17:44.588 "trtype": "$TEST_TRANSPORT", 00:17:44.588 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.588 "adrfam": "ipv4", 00:17:44.588 "trsvcid": "$NVMF_PORT", 00:17:44.588 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.588 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.588 "hdgst": ${hdgst:-false}, 00:17:44.588 "ddgst": ${ddgst:-false} 00:17:44.588 }, 00:17:44.588 "method": "bdev_nvme_attach_controller" 00:17:44.588 } 00:17:44.588 EOF 00:17:44.588 )") 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:17:44.588 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 
-- # config+=("$(cat <<-EOF 00:17:44.589 { 00:17:44.589 "params": { 00:17:44.589 "name": "Nvme$subsystem", 00:17:44.589 "trtype": "$TEST_TRANSPORT", 00:17:44.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:44.589 "adrfam": "ipv4", 00:17:44.589 "trsvcid": "$NVMF_PORT", 00:17:44.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:44.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:44.589 "hdgst": ${hdgst:-false}, 00:17:44.589 "ddgst": ${ddgst:-false} 00:17:44.589 }, 00:17:44.589 "method": "bdev_nvme_attach_controller" 00:17:44.589 } 00:17:44.589 EOF 00:17:44.589 )") 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 892168 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:44.589 "params": { 00:17:44.589 "name": "Nvme1", 00:17:44.589 "trtype": "tcp", 00:17:44.589 "traddr": "10.0.0.2", 00:17:44.589 "adrfam": "ipv4", 00:17:44.589 "trsvcid": "4420", 00:17:44.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.589 "hdgst": false, 00:17:44.589 "ddgst": false 00:17:44.589 }, 00:17:44.589 "method": "bdev_nvme_attach_controller" 00:17:44.589 }' 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:44.589 "params": { 00:17:44.589 "name": "Nvme1", 00:17:44.589 "trtype": "tcp", 00:17:44.589 "traddr": "10.0.0.2", 00:17:44.589 "adrfam": "ipv4", 00:17:44.589 "trsvcid": "4420", 00:17:44.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.589 "hdgst": false, 00:17:44.589 "ddgst": false 00:17:44.589 }, 00:17:44.589 "method": "bdev_nvme_attach_controller" 00:17:44.589 }' 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:44.589 "params": { 00:17:44.589 "name": "Nvme1", 00:17:44.589 "trtype": "tcp", 00:17:44.589 "traddr": "10.0.0.2", 00:17:44.589 "adrfam": "ipv4", 00:17:44.589 "trsvcid": "4420", 00:17:44.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.589 "hdgst": false, 00:17:44.589 "ddgst": false 00:17:44.589 }, 00:17:44.589 "method": "bdev_nvme_attach_controller" 00:17:44.589 }' 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:17:44.589 21:24:19 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:44.589 "params": { 00:17:44.589 "name": "Nvme1", 00:17:44.589 "trtype": "tcp", 00:17:44.589 "traddr": "10.0.0.2", 00:17:44.589 "adrfam": "ipv4", 00:17:44.589 "trsvcid": "4420", 00:17:44.589 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:44.589 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:44.589 "hdgst": false, 00:17:44.589 "ddgst": false 00:17:44.589 }, 00:17:44.589 "method": 
"bdev_nvme_attach_controller" 00:17:44.589 }' 00:17:44.847 [2024-07-11 21:24:19.389030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:44.847 [2024-07-11 21:24:19.389029] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:44.847 [2024-07-11 21:24:19.389030] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:44.847 [2024-07-11 21:24:19.389047] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:17:44.847 [2024-07-11 21:24:19.389134] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-11 21:24:19.389135] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-11 21:24:19.389135] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-07-11 21:24:19.389136] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:17:44.847 --proc-type=auto ] 00:17:44.847 --proc-type=auto ] 00:17:44.847 --proc-type=auto ] 00:17:44.847 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.847 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.847 [2024-07-11 21:24:19.560591] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.105 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.105 [2024-07-11 21:24:19.635890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:45.105 [2024-07-11 21:24:19.660609] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.105 EAL: No free 2048 kB hugepages reported on node 1 00:17:45.105 [2024-07-11 21:24:19.734964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:45.105 [2024-07-11 21:24:19.759052] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.105 [2024-07-11 21:24:19.834917] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:45.105 [2024-07-11 21:24:19.837369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:45.362 [2024-07-11 21:24:19.904667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:17:45.362 Running I/O for 1 seconds... 00:17:45.362 Running I/O for 1 seconds... 00:17:45.362 Running I/O for 1 seconds... 00:17:45.362 Running I/O for 1 seconds... 
00:17:46.294
00:17:46.294 Latency(us)
00:17:46.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:46.294 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096)
00:17:46.294 Nvme1n1 : 1.00 194348.76 759.17 0.00 0.00 656.00 273.07 1080.13
00:17:46.294 ===================================================================================================================
00:17:46.294 Total : 194348.76 759.17 0.00 0.00 656.00 273.07 1080.13
00:17:46.294
00:17:46.294 Latency(us)
00:17:46.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:46.294 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096)
00:17:46.294 Nvme1n1 : 1.01 10753.58 42.01 0.00 0.00 11848.18 8107.05 19709.35
00:17:46.294 ===================================================================================================================
00:17:46.294 Total : 10753.58 42.01 0.00 0.00 11848.18 8107.05 19709.35
00:17:46.294
00:17:46.294 Latency(us)
00:17:46.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:46.294 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096)
00:17:46.294 Nvme1n1 : 1.01 6412.45 25.05 0.00 0.00 19888.92 6359.42 34758.35
00:17:46.294 ===================================================================================================================
00:17:46.294 Total : 6412.45 25.05 0.00 0.00 19888.92 6359.42 34758.35
00:17:46.552
00:17:46.552 Latency(us)
00:17:46.552 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:17:46.552 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096)
00:17:46.552 Nvme1n1 : 1.01 9043.84 35.33 0.00 0.00 14094.51 7136.14 26214.40
00:17:46.552 ===================================================================================================================
00:17:46.552 Total : 9043.84 35.33 0.00 0.00 14094.51 7136.14 26214.40
00:17:46.809 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 892169 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 892172 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 892174 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:46.810 rmmod nvme_tcp 00:17:46.810 rmmod nvme_fabrics 00:17:46.810 rmmod nvme_keyring 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait
-- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 892133 ']' 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 892133 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 892133 ']' 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 892133 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 892133 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 892133' 00:17:46.810 killing process with pid 892133 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 892133 00:17:46.810 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 892133 00:17:47.092 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:47.092 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:47.092 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:47.092 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:47.092 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:47.092 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:47.092 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:47.092 21:24:21 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.631 21:24:23 nvmf_tcp.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:49.631 00:17:49.631 real 0m7.119s 00:17:49.631 user 0m15.022s 00:17:49.631 sys 0m3.855s 00:17:49.631 21:24:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:49.631 21:24:23 nvmf_tcp.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:17:49.631 ************************************ 00:17:49.631 END TEST nvmf_bdev_io_wait 00:17:49.631 ************************************ 00:17:49.631 21:24:23 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:17:49.631 21:24:23 nvmf_tcp -- nvmf/nvmf.sh@51 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:49.631 21:24:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:17:49.631 21:24:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:49.631 21:24:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:17:49.631 ************************************ 00:17:49.631 START TEST nvmf_queue_depth 00:17:49.631 ************************************ 00:17:49.631 
21:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:17:49.631 * Looking for test storage... 00:17:49.631 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:49.631 21:24:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- 
nvmf/common.sh@412 -- # remove_spdk_ns 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:17:49.632 21:24:23 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:51.004 
21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:51.004 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:51.004 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:51.004 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.004 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:51.005 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:51.005 21:24:25 nvmf_tcp.nvmf_queue_depth 
-- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:51.005 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:51.005 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:51.005 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:51.005 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:51.005 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:51.005 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:51.262 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:51.262 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:17:51.262 00:17:51.262 --- 10.0.0.2 ping statistics --- 00:17:51.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.262 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:51.262 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:51.262 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.054 ms 00:17:51.262 00:17:51.262 --- 10.0.0.1 ping statistics --- 00:17:51.262 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:51.262 rtt min/avg/max/mdev = 0.054/0.054/0.054/0.000 ms 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=894380 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 894380 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 894380 ']' 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:51.262 21:24:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.263 21:24:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:51.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:51.263 21:24:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.263 21:24:25 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.263 [2024-07-11 21:24:25.980822] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:51.263 [2024-07-11 21:24:25.980914] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:51.263 EAL: No free 2048 kB hugepages reported on node 1 00:17:51.521 [2024-07-11 21:24:26.050979] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:51.521 [2024-07-11 21:24:26.140846] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:51.521 [2024-07-11 21:24:26.140912] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:51.521 [2024-07-11 21:24:26.140938] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:51.521 [2024-07-11 21:24:26.140953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:51.521 [2024-07-11 21:24:26.140965] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:51.521 [2024-07-11 21:24:26.140995] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.521 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.521 [2024-07-11 21:24:26.288871] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.780 Malloc0 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.780 
21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.780 [2024-07-11 21:24:26.346422] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=894410 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 894410 /var/tmp/bdevperf.sock 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 894410 ']' 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:51.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.780 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:51.780 [2024-07-11 21:24:26.393144] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:17:51.780 [2024-07-11 21:24:26.393210] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid894410 ] 00:17:51.780 EAL: No free 2048 kB hugepages reported on node 1 [2024-07-11 21:24:26.457006] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 [2024-07-11 21:24:26.547934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.038 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.038 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:17:52.038 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:17:52.038 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.038 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:17:52.298 NVMe0n1 00:17:52.298 21:24:26 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.298 21:24:26 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:17:52.298 Running I/O for 10 seconds...
00:18:04.510
00:18:04.510 Latency(us)
00:18:04.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:04.510 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096)
00:18:04.510 Verification LBA range: start 0x0 length 0x4000
00:18:04.510 NVMe0n1 : 10.07 8542.60 33.37 0.00 0.00 119348.55 9563.40 76507.21
00:18:04.510 ===================================================================================================================
00:18:04.510 Total : 8542.60 33.37 0.00 0.00 119348.55 9563.40 76507.21
00:18:04.510 0 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 894410 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 894410 ']' 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 894410 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 894410 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 894410' 00:18:04.510 killing process with pid 894410 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 894410 00:18:04.510 Received shutdown signal, test time was about 10.000000 seconds
00:18:04.510
00:18:04.510 Latency(us)
00:18:04.510 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:04.510 ===================================================================================================================
00:18:04.510 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 894410 00:18:04.510 21:24:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:04.511 rmmod nvme_tcp 00:18:04.511 rmmod nvme_fabrics 00:18:04.511 rmmod nvme_keyring 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 894380 ']' 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 894380 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 894380 ']' 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 894380 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 894380 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 894380' 00:18:04.511 killing process with pid 894380 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 894380 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 894380 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:04.511 21:24:37 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.075 21:24:39 nvmf_tcp.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:05.075 00:18:05.075 real 0m15.924s 00:18:05.075 user 0m22.584s 
00:18:05.075 sys 0m2.925s 00:18:05.075 21:24:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:05.075 21:24:39 nvmf_tcp.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:18:05.075 ************************************ 00:18:05.075 END TEST nvmf_queue_depth 00:18:05.075 ************************************ 00:18:05.075 21:24:39 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:05.075 21:24:39 nvmf_tcp -- nvmf/nvmf.sh@52 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:05.075 21:24:39 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:05.075 21:24:39 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:05.075 21:24:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:05.075 ************************************ 00:18:05.075 START TEST nvmf_target_multipath 00:18:05.075 ************************************ 00:18:05.075 21:24:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:18:05.334 * Looking for test storage... 00:18:05.334 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:05.334 21:24:39 
nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:05.334 21:24:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:18:05.335 21:24:39 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@301 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:07.235 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:07.236 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:07.236 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:07.236 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:07.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:07.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:07.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:18:07.236 00:18:07.236 --- 10.0.0.2 ping statistics --- 00:18:07.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.236 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:07.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:07.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:18:07.236 00:18:07.236 --- 10.0.0.1 ping statistics --- 00:18:07.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:07.236 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:18:07.236 only one NIC for nvmf test 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:07.236 21:24:41 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:07.236 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:07.236 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:07.236 rmmod nvme_tcp 00:18:07.494 rmmod nvme_fabrics 00:18:07.494 rmmod nvme_keyring 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:07.494 21:24:42 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush 
cvl_0_1 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:09.392 00:18:09.392 real 0m4.298s 00:18:09.392 user 0m0.817s 00:18:09.392 sys 0m1.474s 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:09.392 21:24:44 nvmf_tcp.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:18:09.392 ************************************ 00:18:09.392 END TEST nvmf_target_multipath 00:18:09.392 ************************************ 00:18:09.392 21:24:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:09.392 21:24:44 nvmf_tcp -- nvmf/nvmf.sh@53 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:09.392 21:24:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:09.392 21:24:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:09.392 21:24:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:09.392 ************************************ 00:18:09.392 START TEST nvmf_zcopy 00:18:09.392 ************************************ 00:18:09.392 21:24:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:18:09.651 * Looking for test storage... 
00:18:09.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:18:09.651 21:24:44 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:11.551 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.551 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.551 
21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:11.552 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:11.552 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:11.552 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:11.552 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:11.810 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:11.810 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:18:11.810 00:18:11.810 --- 10.0.0.2 ping statistics --- 00:18:11.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.810 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:11.810 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:11.810 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.127 ms 00:18:11.810 00:18:11.810 --- 10.0.0.1 ping statistics --- 00:18:11.810 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:11.810 rtt min/avg/max/mdev = 0.127/0.127/0.127/0.000 ms 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=899569 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 899569 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 899569 ']' 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:11.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:11.810 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:11.810 [2024-07-11 21:24:46.413169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:11.810 [2024-07-11 21:24:46.413251] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:11.810 EAL: No free 2048 kB hugepages reported on node 1 00:18:11.810 [2024-07-11 21:24:46.482963] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:11.810 [2024-07-11 21:24:46.576014] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:11.810 [2024-07-11 21:24:46.576074] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:11.810 [2024-07-11 21:24:46.576091] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:11.810 [2024-07-11 21:24:46.576104] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:11.810 [2024-07-11 21:24:46.576124] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:11.810 [2024-07-11 21:24:46.576155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.069 [2024-07-11 21:24:46.727886] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.069 [2024-07-11 21:24:46.744113] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.069 malloc0 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.069 
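The zcopy target bring-up logged above is the standard RPC sequence with zero-copy switched on for the TCP transport. Condensed into the equivalent rpc.py calls (the flag readings in the comments follow scripts/rpc.py at this revision and are my interpretation, not log output):

# -c 0: in-capsule data size 0, -o: disable the C2H success optimization,
# --zcopy: enable zero-copy on the TCP transport
scripts/rpc.py nvmf_create_transport -t tcp -o -c 0 --zcopy
# allow any host (-a), serial number -s, at most 10 namespaces (-m 10)
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
# 32 MiB malloc bdev with 4096-byte blocks; it is attached as NSID 1 just below
scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0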
21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:12.069 { 00:18:12.069 "params": { 00:18:12.069 "name": "Nvme$subsystem", 00:18:12.069 "trtype": "$TEST_TRANSPORT", 00:18:12.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:12.069 "adrfam": "ipv4", 00:18:12.069 "trsvcid": "$NVMF_PORT", 00:18:12.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:12.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:12.069 "hdgst": ${hdgst:-false}, 00:18:12.069 "ddgst": ${ddgst:-false} 00:18:12.069 }, 00:18:12.069 "method": "bdev_nvme_attach_controller" 00:18:12.069 } 00:18:12.069 EOF 00:18:12.069 )") 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:18:12.069 21:24:46 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:12.069 "params": { 00:18:12.069 "name": "Nvme1", 00:18:12.069 "trtype": "tcp", 00:18:12.069 "traddr": "10.0.0.2", 00:18:12.069 "adrfam": "ipv4", 00:18:12.069 "trsvcid": "4420", 00:18:12.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.069 "hdgst": false, 00:18:12.069 "ddgst": false 00:18:12.069 }, 00:18:12.069 "method": "bdev_nvme_attach_controller" 00:18:12.069 }' 00:18:12.069 [2024-07-11 21:24:46.826765] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:12.069 [2024-07-11 21:24:46.826847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid899597 ] 00:18:12.327 EAL: No free 2048 kB hugepages reported on node 1 00:18:12.327 [2024-07-11 21:24:46.887326] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.327 [2024-07-11 21:24:46.975344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:12.585 Running I/O for 10 seconds... 
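Unlike the queue-depth run, this bdevperf instance takes its whole bdev configuration up front: --json /dev/fd/62 reads a config fed in by process substitution, and gen_nvmf_target_json wraps the bdev_nvme_attach_controller parameters printed above into it, so no post-start RPC attach is needed. A hand-rolled equivalent (only the params block is verbatim from the log; the outer subsystems wrapper is assumed):

build/examples/bdevperf --json <(cat << 'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme1", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode1",
        "hostnqn": "nqn.2016-06.io.spdk:host1",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
) -t 10 -q 128 -w verify -o 8192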
00:18:22.594
00:18:22.594                                        Latency(us)
00:18:22.594 Device Information        : runtime(s)  IOPS     MiB/s  Fail/s  TO/s  Average   min      max
00:18:22.594 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:18:22.594 Verification LBA range: start 0x0 length 0x1000
00:18:22.594 Nvme1n1                   : 10.02       5860.13  45.78  0.00    0.00  21781.59  3446.71  29903.83
00:18:22.594 ===================================================================================================================
00:18:22.594 Total                     :             5860.13  45.78  0.00    0.00  21781.59  3446.71  29903.83
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=900787
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # config=()
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:18:22.853 {
00:18:22.853 "params": {
00:18:22.853 "name": "Nvme$subsystem",
00:18:22.853 "trtype": "$TEST_TRANSPORT",
00:18:22.853 "traddr": "$NVMF_FIRST_TARGET_IP",
00:18:22.853 "adrfam": "ipv4",
00:18:22.853 "trsvcid": "$NVMF_PORT",
00:18:22.853 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:18:22.853 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:18:22.853 "hdgst": ${hdgst:-false},
00:18:22.853 "ddgst": ${ddgst:-false}
00:18:22.853 },
00:18:22.853 "method": "bdev_nvme_attach_controller"
00:18:22.853 }
00:18:22.853 EOF
00:18:22.853 )")
00:18:22.853 [2024-07-11 21:24:57.479068] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:22.853 [2024-07-11 21:24:57.479128] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@554 -- # cat
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@556 -- # jq .
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=,
00:18:22.853 21:24:57 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:18:22.853 "params": {
00:18:22.853 "name": "Nvme1",
00:18:22.853 "trtype": "tcp",
00:18:22.853 "traddr": "10.0.0.2",
00:18:22.853 "adrfam": "ipv4",
00:18:22.853 "trsvcid": "4420",
00:18:22.853 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:18:22.853 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:18:22.853 "hdgst": false,
00:18:22.853 "ddgst": false
00:18:22.853 },
00:18:22.853 "method": "bdev_nvme_attach_controller"
00:18:22.853 }'
[... subsystem.c:2054 "Requested NSID 1 already in use" / nvmf_rpc.c:1546 "Unable to add namespace" error pair repeats, 21:24:57.487-57.519 ...]
00:18:22.853 [2024-07-11 21:24:57.521210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:18:22.853 [2024-07-11 21:24:57.521278] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid900787 ]
[... error pair repeats, 21:24:57.527-57.551 ...]
00:18:22.854 EAL: No free 2048 kB hugepages reported on node 1
[... error pair repeats, 21:24:57.559-57.583 ...]
00:18:22.854 [2024-07-11 21:24:57.590958] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
[... error pair repeats, 21:24:57.591-57.687 ...]
00:18:23.113 [2024-07-11 21:24:57.688260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
[... error pair repeats, 21:24:57.695-57.872 ...]
00:18:23.114 Running I/O for 5 seconds...
00:18:23.114 [2024-07-11 21:24:57.880152] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:23.114 [2024-07-11 21:24:57.880191] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair above repeats continuously, 21:24:57.894 through 21:25:00.181 ...]
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.444 [2024-07-11 21:25:00.193677] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.444 [2024-07-11 21:25:00.193704] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.444 [2024-07-11 21:25:00.203762] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.444 [2024-07-11 21:25:00.203789] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.444 [2024-07-11 21:25:00.213622] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.444 [2024-07-11 21:25:00.213650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.223733] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.223770] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.234015] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.234042] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.244341] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.244368] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.254304] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.254332] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.264593] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.264621] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.275177] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.275204] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.285800] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.285827] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.296513] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.296541] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.306871] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.306908] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.316986] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.317012] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.327535] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.327562] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.337784] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.337811] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.348017] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.348044] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.359042] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.359070] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.371969] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.371997] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.382929] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.382956] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.394690] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.394721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.405771] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.405801] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.416813] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.416840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.702 [2024-07-11 21:25:00.428200] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.702 [2024-07-11 21:25:00.428230] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.703 [2024-07-11 21:25:00.439634] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.703 [2024-07-11 21:25:00.439664] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.703 [2024-07-11 21:25:00.451109] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.703 [2024-07-11 21:25:00.451141] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.703 [2024-07-11 21:25:00.462678] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.703 [2024-07-11 21:25:00.462708] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.473914] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.473942] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.487330] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.487360] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.497867] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.497895] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.509887] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.509914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.521558] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.521596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.533322] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.533352] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.544966] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.544994] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.558412] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.558443] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.569812] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.569840] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.581317] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.581347] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.592717] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.592748] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.606393] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.606424] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.617545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.617576] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.628875] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.628903] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.640127] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.640157] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.651530] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.651561] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.662878] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.662905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.674339] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.674369] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.685718] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.960 [2024-07-11 21:25:00.685749] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.960 [2024-07-11 21:25:00.699181] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.961 [2024-07-11 21:25:00.699213] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.961 [2024-07-11 21:25:00.709976] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.961 [2024-07-11 21:25:00.710004] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:25.961 [2024-07-11 21:25:00.721913] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:25.961 [2024-07-11 21:25:00.721941] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.219 [2024-07-11 21:25:00.732657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.219 [2024-07-11 21:25:00.732684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.219 [2024-07-11 21:25:00.743059] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.219 [2024-07-11 21:25:00.743099] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.219 [2024-07-11 21:25:00.754382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.219 [2024-07-11 21:25:00.754412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.219 [2024-07-11 21:25:00.767872] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.219 [2024-07-11 21:25:00.767900] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.219 [2024-07-11 21:25:00.778208] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.219 [2024-07-11 21:25:00.778239] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.789675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.789705] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.800972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.801000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.811739] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.811778] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.822314] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.822341] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.834844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.834871] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.845195] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.845226] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.856806] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.856849] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.868148] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.868178] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.879489] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.879520] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.890819] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.890847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.901834] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.901862] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.913270] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.913301] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.924952] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.924981] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.936764] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.936809] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.948500] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.948531] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.960241] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.960272] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.971619] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.971650] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.220 [2024-07-11 21:25:00.983315] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.220 [2024-07-11 21:25:00.983346] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:00.994835] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:00.994863] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.006143] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.006170] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.017056] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.017100] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.028841] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.028868] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.039963] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.039991] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.050842] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.050870] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.062092] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.062123] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.073342] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.073372] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.086580] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.086610] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.096264] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.096294] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.108023] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.108050] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.119487] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.119518] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.130877] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.130905] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.144136] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.144167] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.154738] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.154777] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.166454] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.166484] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.178441] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.178471] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.190028] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.190072] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.201306] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.201336] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.212766] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.212810] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.224187] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.224217] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.235996] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.236023] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.478 [2024-07-11 21:25:01.247726] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.478 [2024-07-11 21:25:01.247765] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.259537] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.259567] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.271255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.271285] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.282649] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.282679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.294008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.294035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.305728] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.305768] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.317549] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.317579] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.329251] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.329281] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.342590] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.342620] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.353433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.353463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.364772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.364815] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.377924] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.377952] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.387886] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.387914] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.399866] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.399893] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.411829] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.411856] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.425626] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.425656] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.436379] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.436409] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.448110] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.448140] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.459348] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.459378] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.471323] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.471354] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.482772] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.482803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.494656] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.494686] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.737 [2024-07-11 21:25:01.506055] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.737 [2024-07-11 21:25:01.506082] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.995 [2024-07-11 21:25:01.519545] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.995 [2024-07-11 21:25:01.519575] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.995 [2024-07-11 21:25:01.530654] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.995 [2024-07-11 21:25:01.530684] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.995 [2024-07-11 21:25:01.541894] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.995 [2024-07-11 21:25:01.541921] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.995 [2024-07-11 21:25:01.553495] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.995 [2024-07-11 21:25:01.553525] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.995 [2024-07-11 21:25:01.564861] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.995 [2024-07-11 21:25:01.564889] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.995 [2024-07-11 21:25:01.576521] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.995 [2024-07-11 21:25:01.576551] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.995 [2024-07-11 21:25:01.588052] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.995 [2024-07-11 21:25:01.588083] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.601691] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.601721] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.612491] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.612528] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.623287] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.623317] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.634761] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.634805] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.647888] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.647915] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.658460] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.658490] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.669436] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.669466] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.682890] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.682918] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.693657] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.693687] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.704486] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.704516] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.715544] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.715574] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.726566] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.726596] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.737473] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.737503] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.749009] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.749036] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:26.996 [2024-07-11 21:25:01.762085] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:26.996 [2024-07-11 21:25:01.762116] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.772308] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.772337] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.784107] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.784138] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.795820] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.795847] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.809239] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.809270] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.820433] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.820463] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.831675] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.831712] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.843129] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.843160] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.854343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.854374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.865900] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.865928] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.876897] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.254 [2024-07-11 21:25:01.876925] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.254 [2024-07-11 21:25:01.888132] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.888160] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.900138] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.900165] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.910140] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.910169] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.921773] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.921803] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.933216] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.933246] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.944648] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.944679] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.955724] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.955760] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.965635] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.965663] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.975844] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.975871] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.987476] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.987507] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:01.998651] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:01.998682] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:02.010284] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:02.010314] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.255 [2024-07-11 21:25:02.021981] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.255 [2024-07-11 21:25:02.022009] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.033570] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.033602] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.044507] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.044548] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.055972] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.056000] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.069373] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.069404] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.080343] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.080374] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.091477] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.091508] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.102979] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.103007] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.114205] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.114237] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.127402] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.127433] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.138333] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.138364] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.149644] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.149675] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.163695] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.163726] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.174671] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.174701] 
nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.186255] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.186287] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.198149] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.198180] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.513 [2024-07-11 21:25:02.211700] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.513 [2024-07-11 21:25:02.211730] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.514 [2024-07-11 21:25:02.222810] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.514 [2024-07-11 21:25:02.222837] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.514 [2024-07-11 21:25:02.234090] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.514 [2024-07-11 21:25:02.234120] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.514 [2024-07-11 21:25:02.245943] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.514 [2024-07-11 21:25:02.245971] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.514 [2024-07-11 21:25:02.257461] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.514 [2024-07-11 21:25:02.257492] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.514 [2024-07-11 21:25:02.269019] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.514 [2024-07-11 21:25:02.269053] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.514 [2024-07-11 21:25:02.280736] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.514 [2024-07-11 21:25:02.280771] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.772 [2024-07-11 21:25:02.291950] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.772 [2024-07-11 21:25:02.291978] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.772 [2024-07-11 21:25:02.303625] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.772 [2024-07-11 21:25:02.303655] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.772 [2024-07-11 21:25:02.315008] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.772 [2024-07-11 21:25:02.315035] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.772 [2024-07-11 21:25:02.326156] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.772 [2024-07-11 21:25:02.326186] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.772 [2024-07-11 21:25:02.337382] subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:18:27.772 [2024-07-11 21:25:02.337412] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:27.772 [2024-07-11 21:25:02.349023] 
subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:18:27.772 [2024-07-11 21:25:02.349065] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[editor's note: the same two error entries repeat in lockstep several dozen more times, with timestamps running from 21:25:02.360 to 21:25:02.898 and no other variation; the duplicate entries are elided here.]
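[editor's note: the errors above are expected; zcopy.sh keeps re-issuing the same namespace add while NSID 1 is still attached, so every iteration is rejected. A minimal sketch of the looped call, assuming the default rpc.py socket at /var/tmp/spdk.sock; the bdev name malloc0 is an assumption, while the NQN and NSID are taken from the log:]
  # Attempt to attach a bdev as NSID 1 of cnode1 while NSID 1 is already taken;
  # the target answers "Requested NSID 1 already in use" and the RPC fails.
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1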
00:18:28.290
00:18:28.290 Latency(us)
00:18:28.290 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:18:28.290 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:18:28.290 Nvme1n1 : 5.01 11219.35 87.65 0.00 0.00 11393.00 5024.43 22039.51
00:18:28.290 ===================================================================================================================
00:18:28.290 Total : 11219.35 87.65 0.00 0.00 11393.00 5024.43 22039.51
[editor's note: the same NSID/namespace error pair resumes below the latency summary and repeats about 27 more times at roughly 8 ms intervals, from 21:25:02.903 to 21:25:03.112; elided as above.]
00:18:28.548 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (900787) - No such process
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 900787
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:28.548 delay0
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:18:28.548 21:25:03 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:18:28.548 EAL: No free 2048 kB hugepages reported on node 1
00:18:28.548 [2024-07-11 21:25:03.236150] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:18:35.104 Initializing NVMe Controllers
00:18:35.104 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:18:35.104 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:18:35.104 Initialization complete. Launching workers.
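[editor's note: the wrap-up above swaps the namespace onto a deliberately slow delay bdev and then fires the abort example at it; the example's output continues below. The same sequence as a standalone sketch, assuming it is run from the SPDK repo root against the default rpc.py socket, not the harness's exact invocation:]
  # Wrap malloc0 in a delay bdev that adds ~1 s (1000000 us) to every read/write
  scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 \
      -r 1000000 -t 1000000 -w 1000000 -n 1000000
  # Re-export the slow bdev as namespace 1 of cnode1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
  # Queue 64 random 50/50 read/write I/Os for 5 s and abort them while they crawl
  build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'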
00:18:35.104 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 242, failed: 21971 00:18:35.104 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22087, failed to submit 126 00:18:35.104 success 22018, unsuccess 69, failed 0 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.104 rmmod nvme_tcp 00:18:35.104 rmmod nvme_fabrics 00:18:35.104 rmmod nvme_keyring 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 899569 ']' 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 899569 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 899569 ']' 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 899569 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 899569 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 899569' 00:18:35.104 killing process with pid 899569 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 899569 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 899569 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:35.104 21:25:09 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.628 21:25:11 nvmf_tcp.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:37.628 00:18:37.628 real 0m27.662s 00:18:37.628 user 0m40.151s 00:18:37.628 sys 0m8.951s 00:18:37.628 21:25:11 nvmf_tcp.nvmf_zcopy -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:18:37.628 21:25:11 nvmf_tcp.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 ************************************ 00:18:37.628 END TEST nvmf_zcopy 00:18:37.628 ************************************ 00:18:37.628 21:25:11 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:37.628 21:25:11 nvmf_tcp -- nvmf/nvmf.sh@54 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:37.628 21:25:11 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:37.628 21:25:11 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:37.628 21:25:11 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:37.628 ************************************ 00:18:37.628 START TEST nvmf_nmic 00:18:37.628 ************************************ 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:18:37.628 * Looking for test storage... 00:18:37.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:18:37.628 21:25:11 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@320 -- # 
pci_devs+=("${e810[@]}") 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:39.554 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:39.554 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:39.554 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:39.554 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:39.554 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:39.555 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:39.555 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:39.555 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:39.555 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:39.555 21:25:13 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:39.555 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:39.555 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.234 ms 00:18:39.555 00:18:39.555 --- 10.0.0.2 ping statistics --- 00:18:39.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.555 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:39.555 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:39.555 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.173 ms 00:18:39.555 00:18:39.555 --- 10.0.0.1 ping statistics --- 00:18:39.555 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:39.555 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=904164 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 904164 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 904164 ']' 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:39.555 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:39.555 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.555 [2024-07-11 21:25:14.090486] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:39.555 [2024-07-11 21:25:14.090587] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:39.555 EAL: No free 2048 kB hugepages reported on node 1 00:18:39.555 [2024-07-11 21:25:14.169022] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:39.555 [2024-07-11 21:25:14.268954] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:39.555 [2024-07-11 21:25:14.269024] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
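[editor's note: the prologue above gave the target its own network namespace so initiator and target can exchange real NVMe/TCP traffic over the two ice ports. The shape of that setup, condensed from the nvmf/common.sh trace; interface and namespace names are the ones the log shows, and the commands assume root:]
  # Target side: move cvl_0_0 into a private namespace and give it 10.0.0.2
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # Initiator side: keep cvl_0_1 on the host with 10.0.0.1 and open the NVMe/TCP port
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip link set cvl_0_1 up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # Sanity-check both directions before starting nvmf_tgt inside the namespace
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1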
00:18:39.555 [2024-07-11 21:25:14.269041] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:39.555 [2024-07-11 21:25:14.269055] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:39.555 [2024-07-11 21:25:14.269066] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:39.555 [2024-07-11 21:25:14.269130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.555 [2024-07-11 21:25:14.269161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.555 [2024-07-11 21:25:14.269214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:39.555 [2024-07-11 21:25:14.269217] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 [2024-07-11 21:25:14.425617] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 Malloc0 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 [2024-07-11 21:25:14.479311] tcp.c: 967:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:18:39.814 test case1: single bdev can't be used in multiple subsystems 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 [2024-07-11 21:25:14.503164] bdev.c:8078:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:18:39.814 [2024-07-11 21:25:14.503193] subsystem.c:2083:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:18:39.814 [2024-07-11 21:25:14.503209] nvmf_rpc.c:1546:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:39.814 request: 00:18:39.814 { 00:18:39.814 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:18:39.814 "namespace": { 00:18:39.814 "bdev_name": "Malloc0", 00:18:39.814 "no_auto_visible": false 00:18:39.814 }, 00:18:39.814 "method": "nvmf_subsystem_add_ns", 00:18:39.814 "req_id": 1 00:18:39.814 } 00:18:39.814 Got JSON-RPC error response 00:18:39.814 response: 00:18:39.814 { 00:18:39.814 "code": -32602, 00:18:39.814 "message": "Invalid parameters" 00:18:39.814 } 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:18:39.814 Adding namespace failed - expected result. 
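[editor's note: test case1 above demonstrates that attaching a bdev claims it exclusively (type exclusive_write), so a second subsystem cannot re-export the same bdev; the JSON-RPC request and the -32602 response are printed verbatim in the log. A sketch of the failing call and of the supported alternative, assuming rpc.py from the SPDK repo with its default socket:]
  # Malloc0 is already namespace 1 of cnode1; claiming it again from cnode2 fails
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0   # -> -32602 Invalid parameters
  # Multipath is done with extra listeners on the same subsystem instead,
  # which is exactly what test case2 does next:
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421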
00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:18:39.814 test case2: host connect to nvmf target in multiple paths 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:39.814 [2024-07-11 21:25:14.511278] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:39.814 21:25:14 nvmf_tcp.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:40.751 21:25:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:18:41.321 21:25:15 nvmf_tcp.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:18:41.321 21:25:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:18:41.321 21:25:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:41.321 21:25:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:18:41.321 21:25:15 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:18:43.224 21:25:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:43.224 21:25:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:43.224 21:25:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:43.224 21:25:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:18:43.224 21:25:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:43.224 21:25:17 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:18:43.224 21:25:17 nvmf_tcp.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:43.224 [global] 00:18:43.224 thread=1 00:18:43.224 invalidate=1 00:18:43.224 rw=write 00:18:43.224 time_based=1 00:18:43.224 runtime=1 00:18:43.224 ioengine=libaio 00:18:43.224 direct=1 00:18:43.224 bs=4096 00:18:43.224 iodepth=1 00:18:43.224 norandommap=0 00:18:43.224 numjobs=1 00:18:43.224 00:18:43.224 verify_dump=1 00:18:43.224 verify_backlog=512 00:18:43.224 verify_state_save=0 00:18:43.224 do_verify=1 00:18:43.224 verify=crc32c-intel 00:18:43.224 [job0] 00:18:43.224 filename=/dev/nvme0n1 00:18:43.224 Could not set queue depth (nvme0n1) 00:18:43.482 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:43.482 fio-3.35 00:18:43.482 Starting 1 thread 00:18:44.419 00:18:44.419 job0: (groupid=0, jobs=1): err= 0: pid=904760: Thu Jul 11 21:25:19 2024 00:18:44.419 read: IOPS=20, BW=83.1KiB/s (85.1kB/s)(84.0KiB/1011msec) 00:18:44.419 slat (nsec): min=7612, max=32580, avg=28842.62, stdev=7581.86 00:18:44.419 
clat (usec): min=40887, max=42018, avg=41436.57, stdev=519.19 00:18:44.419 lat (usec): min=40919, max=42050, avg=41465.41, stdev=518.27 00:18:44.419 clat percentiles (usec): 00:18:44.419 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:44.419 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[42206], 00:18:44.419 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:44.419 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:44.419 | 99.99th=[42206] 00:18:44.419 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:18:44.419 slat (usec): min=7, max=29785, avg=76.43, stdev=1315.55 00:18:44.419 clat (usec): min=140, max=310, avg=191.04, stdev=19.92 00:18:44.419 lat (usec): min=147, max=30036, avg=267.47, stdev=1318.42 00:18:44.419 clat percentiles (usec): 00:18:44.419 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 174], 00:18:44.419 | 30.00th=[ 186], 40.00th=[ 190], 50.00th=[ 194], 60.00th=[ 198], 00:18:44.419 | 70.00th=[ 202], 80.00th=[ 206], 90.00th=[ 210], 95.00th=[ 217], 00:18:44.419 | 99.00th=[ 251], 99.50th=[ 273], 99.90th=[ 310], 99.95th=[ 310], 00:18:44.419 | 99.99th=[ 310] 00:18:44.419 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:44.419 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:44.419 lat (usec) : 250=94.93%, 500=1.13% 00:18:44.419 lat (msec) : 50=3.94% 00:18:44.419 cpu : usr=0.59%, sys=1.19%, ctx=536, majf=0, minf=2 00:18:44.419 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:44.419 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.419 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:44.419 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:44.419 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:44.419 00:18:44.419 Run status group 0 (all jobs): 00:18:44.419 READ: bw=83.1KiB/s (85.1kB/s), 83.1KiB/s-83.1KiB/s (85.1kB/s-85.1kB/s), io=84.0KiB (86.0kB), run=1011-1011msec 00:18:44.419 WRITE: bw=2026KiB/s (2074kB/s), 2026KiB/s-2026KiB/s (2074kB/s-2074kB/s), io=2048KiB (2097kB), run=1011-1011msec 00:18:44.419 00:18:44.419 Disk stats (read/write): 00:18:44.419 nvme0n1: ios=43/512, merge=0/0, ticks=1710/88, in_queue=1798, util=98.70% 00:18:44.419 21:25:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:44.680 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 
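[editor's note: the fio job above drives a single queue-depth-1, 4 KiB write stream with crc32c verification against /dev/nvme0n1, reached over the two TCP paths set up in test case2. A rough way to reproduce it outside the harness, assuming the target and both listeners are still up; the log's nvme connect also passes --hostnqn and --hostid, omitted here for brevity:]
  # Attach the namespace over both listeners, then run the equivalent fio job
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421
  fio --name=job0 --filename=/dev/nvme0n1 --ioengine=libaio --direct=1 \
      --rw=write --bs=4096 --iodepth=1 --numjobs=1 --thread \
      --time_based --runtime=1 --verify=crc32c-intel --verify_backlog=512
  # Detach both paths when done
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1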
00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:44.680 rmmod nvme_tcp 00:18:44.680 rmmod nvme_fabrics 00:18:44.680 rmmod nvme_keyring 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 904164 ']' 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 904164 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 904164 ']' 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 904164 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 904164 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 904164' 00:18:44.680 killing process with pid 904164 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 904164 00:18:44.680 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 904164 00:18:44.939 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:44.939 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:44.939 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:44.939 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:44.939 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:44.939 21:25:19 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:44.939 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:44.939 21:25:19 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.479 21:25:21 nvmf_tcp.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:47.479 00:18:47.479 real 0m9.812s 00:18:47.479 user 0m22.269s 00:18:47.479 sys 0m2.287s 00:18:47.479 21:25:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:47.479 21:25:21 nvmf_tcp.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:18:47.479 ************************************ 00:18:47.479 END TEST nvmf_nmic 00:18:47.479 ************************************ 00:18:47.479 21:25:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:18:47.480 21:25:21 nvmf_tcp -- nvmf/nvmf.sh@55 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:47.480 21:25:21 nvmf_tcp -- common/autotest_common.sh@1099 -- # 
'[' 3 -le 1 ']' 00:18:47.480 21:25:21 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:47.480 21:25:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:18:47.480 ************************************ 00:18:47.480 START TEST nvmf_fio_target 00:18:47.480 ************************************ 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:18:47.480 * Looking for test storage... 00:18:47.480 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@14 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:18:47.480 21:25:21 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:49.387 21:25:23 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:49.387 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:49.388 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:49.388 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.388 21:25:23 
nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:49.388 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:49.388 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:49.388 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:49.388 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:18:49.388 00:18:49.388 --- 10.0.0.2 ping statistics --- 00:18:49.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.388 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:49.388 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:49.388 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:18:49.388 00:18:49.388 --- 10.0.0.1 ping statistics --- 00:18:49.388 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:49.388 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=906870 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 906870 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 906870 ']' 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:49.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
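# --- annotation: not part of the captured trace -----------------------------
# A minimal sketch of the namespace wiring nvmf_tcp_init performs above,
# assuming the two ice ports enumerated earlier (cvl_0_0 = target side,
# cvl_0_1 = initiator side); every command mirrors the nvmf/common.sh trace.
ip netns add cvl_0_0_ns_spdk                        # namespace that hosts nvmf_tgt
ip link set cvl_0_0 netns cvl_0_0_ns_spdk           # move the target port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                 # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP
ping -c 1 10.0.0.2                                  # host -> namespace check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1    # namespace -> host check
# -----------------------------------------------------------------------------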
00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:49.388 21:25:23 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.388 [2024-07-11 21:25:24.017463] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:18:49.388 [2024-07-11 21:25:24.017544] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:49.388 EAL: No free 2048 kB hugepages reported on node 1 00:18:49.388 [2024-07-11 21:25:24.088274] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:49.647 [2024-07-11 21:25:24.188269] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:49.647 [2024-07-11 21:25:24.188337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:49.647 [2024-07-11 21:25:24.188354] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:49.647 [2024-07-11 21:25:24.188367] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:49.647 [2024-07-11 21:25:24.188379] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:49.647 [2024-07-11 21:25:24.188444] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:49.647 [2024-07-11 21:25:24.188499] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.647 [2024-07-11 21:25:24.188534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:49.647 [2024-07-11 21:25:24.188536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.647 21:25:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:49.647 21:25:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:18:49.647 21:25:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:49.647 21:25:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:49.647 21:25:24 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:18:49.647 21:25:24 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:49.647 21:25:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:18:49.905 [2024-07-11 21:25:24.589431] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:49.905 21:25:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:50.163 21:25:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:18:50.163 21:25:24 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:50.422 21:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:18:50.422 21:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:50.679 21:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 
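# --- annotation: not part of the captured trace -----------------------------
# The fio target is assembled purely through SPDK's JSON-RPC client. A
# condensed sketch of the fio.sh calls traced so far; the raid0/concat0
# bdevs, the cnode1 subsystem, its 10.0.0.2:4420 listener, and the host-side
# nvme connect follow in the trace below.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8192 B IO unit size
$rpc bdev_malloc_create 64 512                 # 64 MiB bdev, 512 B blocks -> Malloc0
$rpc bdev_malloc_create 64 512                 # -> Malloc1
$rpc bdev_malloc_create 64 512                 # -> Malloc2, first raid0 member
# -----------------------------------------------------------------------------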
00:18:50.679 21:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:50.936 21:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:18:50.936 21:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:18:51.193 21:25:25 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.450 21:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:18:51.450 21:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.708 21:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:18:51.708 21:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:18:51.966 21:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:18:51.966 21:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:18:52.224 21:25:26 nvmf_tcp.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:18:52.482 21:25:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:52.482 21:25:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:52.740 21:25:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:18:52.740 21:25:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:52.998 21:25:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:53.256 [2024-07-11 21:25:27.891962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:53.256 21:25:27 nvmf_tcp.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:18:53.514 21:25:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:18:53.775 21:25:28 nvmf_tcp.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:54.344 21:25:29 nvmf_tcp.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:18:54.344 21:25:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:18:54.344 21:25:29 nvmf_tcp.nvmf_fio_target -- 
common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:18:54.344 21:25:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:18:54.344 21:25:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:18:54.344 21:25:29 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:18:56.269 21:25:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:18:56.269 21:25:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:18:56.269 21:25:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:18:56.269 21:25:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:18:56.269 21:25:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:18:56.269 21:25:31 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:18:56.269 21:25:31 nvmf_tcp.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:18:56.528 [global] 00:18:56.528 thread=1 00:18:56.528 invalidate=1 00:18:56.528 rw=write 00:18:56.528 time_based=1 00:18:56.528 runtime=1 00:18:56.528 ioengine=libaio 00:18:56.528 direct=1 00:18:56.528 bs=4096 00:18:56.528 iodepth=1 00:18:56.528 norandommap=0 00:18:56.528 numjobs=1 00:18:56.528 00:18:56.528 verify_dump=1 00:18:56.528 verify_backlog=512 00:18:56.528 verify_state_save=0 00:18:56.528 do_verify=1 00:18:56.528 verify=crc32c-intel 00:18:56.528 [job0] 00:18:56.528 filename=/dev/nvme0n1 00:18:56.528 [job1] 00:18:56.528 filename=/dev/nvme0n2 00:18:56.528 [job2] 00:18:56.528 filename=/dev/nvme0n3 00:18:56.528 [job3] 00:18:56.528 filename=/dev/nvme0n4 00:18:56.528 Could not set queue depth (nvme0n1) 00:18:56.528 Could not set queue depth (nvme0n2) 00:18:56.528 Could not set queue depth (nvme0n3) 00:18:56.528 Could not set queue depth (nvme0n4) 00:18:56.528 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.528 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.528 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.528 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:56.528 fio-3.35 00:18:56.528 Starting 4 threads 00:18:57.903 00:18:57.903 job0: (groupid=0, jobs=1): err= 0: pid=907818: Thu Jul 11 21:25:32 2024 00:18:57.903 read: IOPS=20, BW=82.7KiB/s (84.7kB/s)(84.0KiB/1016msec) 00:18:57.903 slat (nsec): min=15040, max=33278, avg=21881.62, stdev=7982.49 00:18:57.903 clat (usec): min=40862, max=41056, avg=40965.08, stdev=43.97 00:18:57.903 lat (usec): min=40879, max=41072, avg=40986.96, stdev=42.44 00:18:57.903 clat percentiles (usec): 00:18:57.903 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:57.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:57.903 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:57.903 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:57.903 | 99.99th=[41157] 00:18:57.903 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:18:57.903 slat (nsec): min=6653, max=61320, avg=18425.92, stdev=9276.81 
00:18:57.903 clat (usec): min=158, max=713, avg=278.95, stdev=84.15 00:18:57.903 lat (usec): min=168, max=747, avg=297.38, stdev=82.63 00:18:57.903 clat percentiles (usec): 00:18:57.903 | 1.00th=[ 167], 5.00th=[ 176], 10.00th=[ 186], 20.00th=[ 206], 00:18:57.903 | 30.00th=[ 235], 40.00th=[ 251], 50.00th=[ 262], 60.00th=[ 277], 00:18:57.903 | 70.00th=[ 310], 80.00th=[ 338], 90.00th=[ 383], 95.00th=[ 429], 00:18:57.903 | 99.00th=[ 553], 99.50th=[ 644], 99.90th=[ 717], 99.95th=[ 717], 00:18:57.903 | 99.99th=[ 717] 00:18:57.903 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:57.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:57.903 lat (usec) : 250=37.90%, 500=56.10%, 750=2.06% 00:18:57.903 lat (msec) : 50=3.94% 00:18:57.903 cpu : usr=0.10%, sys=1.28%, ctx=534, majf=0, minf=1 00:18:57.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.903 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.903 job1: (groupid=0, jobs=1): err= 0: pid=907820: Thu Jul 11 21:25:32 2024 00:18:57.903 read: IOPS=20, BW=81.4KiB/s (83.3kB/s)(84.0KiB/1032msec) 00:18:57.903 slat (nsec): min=9925, max=35979, avg=24430.33, stdev=9804.10 00:18:57.903 clat (usec): min=40668, max=41061, avg=40951.12, stdev=75.77 00:18:57.903 lat (usec): min=40677, max=41077, avg=40975.55, stdev=77.35 00:18:57.903 clat percentiles (usec): 00:18:57.903 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:57.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:57.903 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:57.903 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:57.903 | 99.99th=[41157] 00:18:57.903 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:18:57.903 slat (usec): min=8, max=7890, avg=35.68, stdev=347.96 00:18:57.903 clat (usec): min=151, max=676, avg=292.64, stdev=84.46 00:18:57.903 lat (usec): min=162, max=8178, avg=328.33, stdev=358.36 00:18:57.903 clat percentiles (usec): 00:18:57.903 | 1.00th=[ 157], 5.00th=[ 169], 10.00th=[ 206], 20.00th=[ 233], 00:18:57.903 | 30.00th=[ 251], 40.00th=[ 262], 50.00th=[ 273], 60.00th=[ 289], 00:18:57.903 | 70.00th=[ 306], 80.00th=[ 363], 90.00th=[ 412], 95.00th=[ 453], 00:18:57.903 | 99.00th=[ 537], 99.50th=[ 603], 99.90th=[ 676], 99.95th=[ 676], 00:18:57.903 | 99.99th=[ 676] 00:18:57.903 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:57.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:57.903 lat (usec) : 250=28.89%, 500=65.10%, 750=2.06% 00:18:57.903 lat (msec) : 50=3.94% 00:18:57.903 cpu : usr=0.19%, sys=1.75%, ctx=535, majf=0, minf=1 00:18:57.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.903 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.903 job2: (groupid=0, jobs=1): err= 0: pid=907821: Thu Jul 11 21:25:32 2024 00:18:57.903 read: IOPS=30, 
BW=123KiB/s (126kB/s)(128KiB/1040msec) 00:18:57.903 slat (nsec): min=10149, max=37722, avg=28606.81, stdev=8714.59 00:18:57.903 clat (usec): min=372, max=42069, avg=27318.51, stdev=19795.20 00:18:57.903 lat (usec): min=407, max=42103, avg=27347.11, stdev=19790.37 00:18:57.903 clat percentiles (usec): 00:18:57.903 | 1.00th=[ 371], 5.00th=[ 383], 10.00th=[ 392], 20.00th=[ 408], 00:18:57.903 | 30.00th=[ 433], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:18:57.903 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:57.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:57.903 | 99.99th=[42206] 00:18:57.903 write: IOPS=492, BW=1969KiB/s (2016kB/s)(2048KiB/1040msec); 0 zone resets 00:18:57.903 slat (usec): min=8, max=709, avg=22.90, stdev=32.94 00:18:57.903 clat (usec): min=170, max=726, avg=292.42, stdev=107.23 00:18:57.903 lat (usec): min=183, max=1047, avg=315.32, stdev=118.33 00:18:57.903 clat percentiles (usec): 00:18:57.903 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 208], 00:18:57.903 | 30.00th=[ 223], 40.00th=[ 237], 50.00th=[ 249], 60.00th=[ 269], 00:18:57.903 | 70.00th=[ 322], 80.00th=[ 383], 90.00th=[ 453], 95.00th=[ 519], 00:18:57.903 | 99.00th=[ 635], 99.50th=[ 685], 99.90th=[ 725], 99.95th=[ 725], 00:18:57.903 | 99.99th=[ 725] 00:18:57.903 bw ( KiB/s): min= 4096, max= 4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:57.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:57.903 lat (usec) : 250=47.61%, 500=43.20%, 750=5.33% 00:18:57.903 lat (msec) : 50=3.86% 00:18:57.903 cpu : usr=0.67%, sys=1.44%, ctx=546, majf=0, minf=2 00:18:57.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.903 issued rwts: total=32,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.903 job3: (groupid=0, jobs=1): err= 0: pid=907822: Thu Jul 11 21:25:32 2024 00:18:57.903 read: IOPS=117, BW=472KiB/s (483kB/s)(480KiB/1017msec) 00:18:57.903 slat (nsec): min=8246, max=35580, avg=14389.38, stdev=9271.25 00:18:57.903 clat (usec): min=253, max=41097, avg=7394.89, stdev=15491.12 00:18:57.903 lat (usec): min=262, max=41131, avg=7409.28, stdev=15495.64 00:18:57.903 clat percentiles (usec): 00:18:57.903 | 1.00th=[ 253], 5.00th=[ 260], 10.00th=[ 265], 20.00th=[ 269], 00:18:57.903 | 30.00th=[ 273], 40.00th=[ 277], 50.00th=[ 281], 60.00th=[ 293], 00:18:57.903 | 70.00th=[ 310], 80.00th=[ 383], 90.00th=[41157], 95.00th=[41157], 00:18:57.903 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:57.903 | 99.99th=[41157] 00:18:57.903 write: IOPS=503, BW=2014KiB/s (2062kB/s)(2048KiB/1017msec); 0 zone resets 00:18:57.903 slat (nsec): min=6993, max=51345, avg=19424.16, stdev=9063.39 00:18:57.903 clat (usec): min=160, max=811, avg=223.48, stdev=59.59 00:18:57.903 lat (usec): min=169, max=834, avg=242.90, stdev=61.46 00:18:57.903 clat percentiles (usec): 00:18:57.904 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 182], 20.00th=[ 192], 00:18:57.904 | 30.00th=[ 202], 40.00th=[ 210], 50.00th=[ 215], 60.00th=[ 223], 00:18:57.904 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 253], 95.00th=[ 289], 00:18:57.904 | 99.00th=[ 469], 99.50th=[ 742], 99.90th=[ 816], 99.95th=[ 816], 00:18:57.904 | 99.99th=[ 816] 00:18:57.904 bw ( KiB/s): min= 4096, max= 
4096, per=52.00%, avg=4096.00, stdev= 0.00, samples=1 00:18:57.904 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:57.904 lat (usec) : 250=70.57%, 500=25.32%, 750=0.47%, 1000=0.32% 00:18:57.904 lat (msec) : 50=3.32% 00:18:57.904 cpu : usr=0.98%, sys=0.98%, ctx=635, majf=0, minf=1 00:18:57.904 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:57.904 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.904 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:57.904 issued rwts: total=120,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:57.904 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:57.904 00:18:57.904 Run status group 0 (all jobs): 00:18:57.904 READ: bw=746KiB/s (764kB/s), 81.4KiB/s-472KiB/s (83.3kB/s-483kB/s), io=776KiB (795kB), run=1016-1040msec 00:18:57.904 WRITE: bw=7877KiB/s (8066kB/s), 1969KiB/s-2016KiB/s (2016kB/s-2064kB/s), io=8192KiB (8389kB), run=1016-1040msec 00:18:57.904 00:18:57.904 Disk stats (read/write): 00:18:57.904 nvme0n1: ios=62/512, merge=0/0, ticks=1129/140, in_queue=1269, util=85.97% 00:18:57.904 nvme0n2: ios=58/512, merge=0/0, ticks=798/141, in_queue=939, util=90.64% 00:18:57.904 nvme0n3: ios=84/512, merge=0/0, ticks=1228/136, in_queue=1364, util=93.52% 00:18:57.904 nvme0n4: ios=169/512, merge=0/0, ticks=883/110, in_queue=993, util=94.74% 00:18:57.904 21:25:32 nvmf_tcp.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:18:57.904 [global] 00:18:57.904 thread=1 00:18:57.904 invalidate=1 00:18:57.904 rw=randwrite 00:18:57.904 time_based=1 00:18:57.904 runtime=1 00:18:57.904 ioengine=libaio 00:18:57.904 direct=1 00:18:57.904 bs=4096 00:18:57.904 iodepth=1 00:18:57.904 norandommap=0 00:18:57.904 numjobs=1 00:18:57.904 00:18:57.904 verify_dump=1 00:18:57.904 verify_backlog=512 00:18:57.904 verify_state_save=0 00:18:57.904 do_verify=1 00:18:57.904 verify=crc32c-intel 00:18:57.904 [job0] 00:18:57.904 filename=/dev/nvme0n1 00:18:57.904 [job1] 00:18:57.904 filename=/dev/nvme0n2 00:18:57.904 [job2] 00:18:57.904 filename=/dev/nvme0n3 00:18:57.904 [job3] 00:18:57.904 filename=/dev/nvme0n4 00:18:57.904 Could not set queue depth (nvme0n1) 00:18:57.904 Could not set queue depth (nvme0n2) 00:18:57.904 Could not set queue depth (nvme0n3) 00:18:57.904 Could not set queue depth (nvme0n4) 00:18:58.161 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.161 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.161 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.161 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:18:58.161 fio-3.35 00:18:58.161 Starting 4 threads 00:18:59.534 00:18:59.534 job0: (groupid=0, jobs=1): err= 0: pid=908065: Thu Jul 11 21:25:33 2024 00:18:59.534 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:18:59.534 slat (nsec): min=6318, max=50848, avg=16861.37, stdev=2978.92 00:18:59.534 clat (usec): min=212, max=41027, avg=353.42, stdev=1465.58 00:18:59.534 lat (usec): min=218, max=41042, avg=370.28, stdev=1465.57 00:18:59.534 clat percentiles (usec): 00:18:59.534 | 1.00th=[ 227], 5.00th=[ 245], 10.00th=[ 289], 20.00th=[ 293], 00:18:59.534 | 30.00th=[ 297], 40.00th=[ 302], 
50.00th=[ 302], 60.00th=[ 306], 00:18:59.534 | 70.00th=[ 310], 80.00th=[ 314], 90.00th=[ 318], 95.00th=[ 322], 00:18:59.534 | 99.00th=[ 330], 99.50th=[ 359], 99.90th=[40633], 99.95th=[41157], 00:18:59.534 | 99.99th=[41157] 00:18:59.534 write: IOPS=1897, BW=7588KiB/s (7771kB/s)(7596KiB/1001msec); 0 zone resets 00:18:59.534 slat (nsec): min=6595, max=54968, avg=17888.56, stdev=6086.55 00:18:59.534 clat (usec): min=158, max=737, avg=199.86, stdev=28.14 00:18:59.534 lat (usec): min=169, max=746, avg=217.75, stdev=29.84 00:18:59.534 clat percentiles (usec): 00:18:59.534 | 1.00th=[ 165], 5.00th=[ 172], 10.00th=[ 178], 20.00th=[ 188], 00:18:59.534 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 198], 60.00th=[ 200], 00:18:59.534 | 70.00th=[ 204], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 239], 00:18:59.534 | 99.00th=[ 265], 99.50th=[ 306], 99.90th=[ 709], 99.95th=[ 734], 00:18:59.534 | 99.99th=[ 734] 00:18:59.534 bw ( KiB/s): min= 8192, max= 8192, per=60.58%, avg=8192.00, stdev= 0.00, samples=1 00:18:59.534 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:18:59.534 lat (usec) : 250=56.51%, 500=43.35%, 750=0.09% 00:18:59.534 lat (msec) : 50=0.06% 00:18:59.534 cpu : usr=3.90%, sys=8.60%, ctx=3436, majf=0, minf=1 00:18:59.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.535 issued rwts: total=1536,1899,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.535 job1: (groupid=0, jobs=1): err= 0: pid=908080: Thu Jul 11 21:25:33 2024 00:18:59.535 read: IOPS=21, BW=87.7KiB/s (89.8kB/s)(88.0KiB/1003msec) 00:18:59.535 slat (nsec): min=8232, max=36626, avg=28420.45, stdev=8056.32 00:18:59.535 clat (usec): min=10325, max=41153, avg=39575.58, stdev=6533.27 00:18:59.535 lat (usec): min=10343, max=41161, avg=39604.00, stdev=6535.61 00:18:59.535 clat percentiles (usec): 00:18:59.535 | 1.00th=[10290], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:18:59.535 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:18:59.535 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:18:59.535 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:59.535 | 99.99th=[41157] 00:18:59.535 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:18:59.535 slat (nsec): min=6167, max=48619, avg=15480.91, stdev=9090.39 00:18:59.535 clat (usec): min=146, max=2514, avg=237.55, stdev=114.52 00:18:59.535 lat (usec): min=154, max=2522, avg=253.03, stdev=114.98 00:18:59.535 clat percentiles (usec): 00:18:59.535 | 1.00th=[ 155], 5.00th=[ 167], 10.00th=[ 176], 20.00th=[ 200], 00:18:59.535 | 30.00th=[ 210], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 231], 00:18:59.535 | 70.00th=[ 241], 80.00th=[ 249], 90.00th=[ 302], 95.00th=[ 371], 00:18:59.535 | 99.00th=[ 433], 99.50th=[ 453], 99.90th=[ 2507], 99.95th=[ 2507], 00:18:59.535 | 99.99th=[ 2507] 00:18:59.535 bw ( KiB/s): min= 4096, max= 4096, per=30.29%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.535 lat (usec) : 250=77.53%, 500=18.16% 00:18:59.535 lat (msec) : 4=0.19%, 20=0.19%, 50=3.93% 00:18:59.535 cpu : usr=0.60%, sys=0.60%, ctx=536, majf=0, minf=1 00:18:59.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.535 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.535 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.535 job2: (groupid=0, jobs=1): err= 0: pid=908117: Thu Jul 11 21:25:33 2024 00:18:59.535 read: IOPS=21, BW=87.9KiB/s (90.0kB/s)(88.0KiB/1001msec) 00:18:59.535 slat (nsec): min=11904, max=35370, avg=30105.09, stdev=8827.32 00:18:59.535 clat (usec): min=455, max=42331, avg=39885.55, stdev=8819.00 00:18:59.535 lat (usec): min=470, max=42346, avg=39915.65, stdev=8822.56 00:18:59.535 clat percentiles (usec): 00:18:59.535 | 1.00th=[ 457], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:18:59.535 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:18:59.535 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:18:59.535 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:18:59.535 | 99.99th=[42206] 00:18:59.535 write: IOPS=511, BW=2046KiB/s (2095kB/s)(2048KiB/1001msec); 0 zone resets 00:18:59.535 slat (nsec): min=6808, max=47723, avg=12955.72, stdev=5695.22 00:18:59.535 clat (usec): min=171, max=380, avg=222.02, stdev=28.14 00:18:59.535 lat (usec): min=179, max=399, avg=234.97, stdev=29.17 00:18:59.535 clat percentiles (usec): 00:18:59.535 | 1.00th=[ 178], 5.00th=[ 186], 10.00th=[ 192], 20.00th=[ 202], 00:18:59.535 | 30.00th=[ 208], 40.00th=[ 212], 50.00th=[ 219], 60.00th=[ 225], 00:18:59.535 | 70.00th=[ 231], 80.00th=[ 237], 90.00th=[ 251], 95.00th=[ 269], 00:18:59.535 | 99.00th=[ 322], 99.50th=[ 363], 99.90th=[ 379], 99.95th=[ 379], 00:18:59.535 | 99.99th=[ 379] 00:18:59.535 bw ( KiB/s): min= 4096, max= 4096, per=30.29%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.535 lat (usec) : 250=86.14%, 500=9.93% 00:18:59.535 lat (msec) : 50=3.93% 00:18:59.535 cpu : usr=0.20%, sys=0.80%, ctx=535, majf=0, minf=2 00:18:59.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.535 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.535 job3: (groupid=0, jobs=1): err= 0: pid=908128: Thu Jul 11 21:25:33 2024 00:18:59.535 read: IOPS=135, BW=543KiB/s (556kB/s)(552KiB/1016msec) 00:18:59.535 slat (nsec): min=5907, max=37685, avg=14380.30, stdev=9723.50 00:18:59.535 clat (usec): min=224, max=41246, avg=6161.53, stdev=14381.88 00:18:59.535 lat (usec): min=230, max=41253, avg=6175.91, stdev=14389.10 00:18:59.535 clat percentiles (usec): 00:18:59.535 | 1.00th=[ 227], 5.00th=[ 233], 10.00th=[ 235], 20.00th=[ 241], 00:18:59.535 | 30.00th=[ 253], 40.00th=[ 262], 50.00th=[ 269], 60.00th=[ 277], 00:18:59.535 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[41157], 95.00th=[41157], 00:18:59.535 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:18:59.535 | 99.99th=[41157] 00:18:59.535 write: IOPS=503, BW=2016KiB/s (2064kB/s)(2048KiB/1016msec); 0 zone resets 00:18:59.535 slat (nsec): min=7865, max=55921, avg=17731.43, stdev=10165.67 00:18:59.535 clat (usec): min=156, max=853, avg=294.65, stdev=91.04 00:18:59.535 lat (usec): min=164, max=868, avg=312.38, stdev=96.22 
00:18:59.535 clat percentiles (usec): 00:18:59.535 | 1.00th=[ 159], 5.00th=[ 163], 10.00th=[ 174], 20.00th=[ 200], 00:18:59.535 | 30.00th=[ 233], 40.00th=[ 277], 50.00th=[ 297], 60.00th=[ 322], 00:18:59.535 | 70.00th=[ 347], 80.00th=[ 367], 90.00th=[ 408], 95.00th=[ 424], 00:18:59.535 | 99.00th=[ 478], 99.50th=[ 494], 99.90th=[ 857], 99.95th=[ 857], 00:18:59.535 | 99.99th=[ 857] 00:18:59.535 bw ( KiB/s): min= 4096, max= 4096, per=30.29%, avg=4096.00, stdev= 0.00, samples=1 00:18:59.535 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:18:59.535 lat (usec) : 250=31.38%, 500=65.23%, 1000=0.31% 00:18:59.535 lat (msec) : 50=3.08% 00:18:59.535 cpu : usr=0.59%, sys=1.58%, ctx=651, majf=0, minf=1 00:18:59.535 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:59.535 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.535 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.535 issued rwts: total=138,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.535 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:59.535 00:18:59.535 Run status group 0 (all jobs): 00:18:59.535 READ: bw=6764KiB/s (6926kB/s), 87.7KiB/s-6138KiB/s (89.8kB/s-6285kB/s), io=6872KiB (7037kB), run=1001-1016msec 00:18:59.535 WRITE: bw=13.2MiB/s (13.8MB/s), 2016KiB/s-7588KiB/s (2064kB/s-7771kB/s), io=13.4MiB (14.1MB), run=1001-1016msec 00:18:59.535 00:18:59.535 Disk stats (read/write): 00:18:59.535 nvme0n1: ios=1330/1536, merge=0/0, ticks=696/313, in_queue=1009, util=96.39% 00:18:59.535 nvme0n2: ios=41/512, merge=0/0, ticks=1700/120, in_queue=1820, util=97.05% 00:18:59.535 nvme0n3: ios=41/512, merge=0/0, ticks=1655/114, in_queue=1769, util=97.48% 00:18:59.535 nvme0n4: ios=156/512, merge=0/0, ticks=1587/144, in_queue=1731, util=97.35% 00:18:59.535 21:25:33 nvmf_tcp.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:18:59.535 [global] 00:18:59.535 thread=1 00:18:59.535 invalidate=1 00:18:59.535 rw=write 00:18:59.535 time_based=1 00:18:59.535 runtime=1 00:18:59.535 ioengine=libaio 00:18:59.535 direct=1 00:18:59.535 bs=4096 00:18:59.535 iodepth=128 00:18:59.535 norandommap=0 00:18:59.535 numjobs=1 00:18:59.535 00:18:59.535 verify_dump=1 00:18:59.535 verify_backlog=512 00:18:59.535 verify_state_save=0 00:18:59.535 do_verify=1 00:18:59.535 verify=crc32c-intel 00:18:59.535 [job0] 00:18:59.535 filename=/dev/nvme0n1 00:18:59.535 [job1] 00:18:59.535 filename=/dev/nvme0n2 00:18:59.535 [job2] 00:18:59.535 filename=/dev/nvme0n3 00:18:59.535 [job3] 00:18:59.536 filename=/dev/nvme0n4 00:18:59.536 Could not set queue depth (nvme0n1) 00:18:59.536 Could not set queue depth (nvme0n2) 00:18:59.536 Could not set queue depth (nvme0n3) 00:18:59.536 Could not set queue depth (nvme0n4) 00:18:59.536 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:59.536 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:59.536 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:59.536 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:18:59.536 fio-3.35 00:18:59.536 Starting 4 threads 00:19:00.911 00:19:00.911 job0: (groupid=0, jobs=1): err= 0: pid=908402: Thu Jul 11 21:25:35 2024 00:19:00.911 read: IOPS=4083, BW=16.0MiB/s 
(16.7MB/s)(16.0MiB/1003msec) 00:19:00.911 slat (usec): min=2, max=17683, avg=97.72, stdev=674.18 00:19:00.911 clat (usec): min=1031, max=37417, avg=12420.67, stdev=4001.37 00:19:00.911 lat (usec): min=1202, max=37428, avg=12518.39, stdev=4035.25 00:19:00.911 clat percentiles (usec): 00:19:00.911 | 1.00th=[ 4113], 5.00th=[ 6849], 10.00th=[ 8356], 20.00th=[10159], 00:19:00.911 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11863], 60.00th=[12387], 00:19:00.911 | 70.00th=[13435], 80.00th=[14353], 90.00th=[17695], 95.00th=[19792], 00:19:00.911 | 99.00th=[25822], 99.50th=[28443], 99.90th=[33162], 99.95th=[37487], 00:19:00.911 | 99.99th=[37487] 00:19:00.911 write: IOPS=4258, BW=16.6MiB/s (17.4MB/s)(16.7MiB/1003msec); 0 zone resets 00:19:00.911 slat (usec): min=3, max=24250, avg=125.74, stdev=912.08 00:19:00.911 clat (usec): min=416, max=120550, avg=16269.49, stdev=17638.92 00:19:00.911 lat (usec): min=1763, max=120568, avg=16395.23, stdev=17766.40 00:19:00.911 clat percentiles (msec): 00:19:00.911 | 1.00th=[ 4], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 10], 00:19:00.911 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:19:00.911 | 70.00th=[ 13], 80.00th=[ 16], 90.00th=[ 23], 95.00th=[ 40], 00:19:00.911 | 99.00th=[ 111], 99.50th=[ 116], 99.90th=[ 121], 99.95th=[ 121], 00:19:00.911 | 99.99th=[ 121] 00:19:00.911 bw ( KiB/s): min=14808, max=18336, per=23.59%, avg=16572.00, stdev=2494.67, samples=2 00:19:00.911 iops : min= 3702, max= 4584, avg=4143.00, stdev=623.67, samples=2 00:19:00.911 lat (usec) : 500=0.01% 00:19:00.911 lat (msec) : 2=0.14%, 4=0.86%, 10=18.32%, 20=71.14%, 50=7.43% 00:19:00.911 lat (msec) : 100=0.98%, 250=1.11% 00:19:00.911 cpu : usr=3.19%, sys=6.19%, ctx=365, majf=0, minf=1 00:19:00.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:00.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:00.911 issued rwts: total=4096,4271,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:00.911 job1: (groupid=0, jobs=1): err= 0: pid=908403: Thu Jul 11 21:25:35 2024 00:19:00.911 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec) 00:19:00.911 slat (usec): min=2, max=11223, avg=95.06, stdev=555.76 00:19:00.911 clat (usec): min=5917, max=30671, avg=12306.91, stdev=2199.68 00:19:00.911 lat (usec): min=5927, max=30675, avg=12401.97, stdev=2256.46 00:19:00.911 clat percentiles (usec): 00:19:00.911 | 1.00th=[ 8455], 5.00th=[10028], 10.00th=[10552], 20.00th=[10814], 00:19:00.911 | 30.00th=[11207], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:19:00.911 | 70.00th=[12649], 80.00th=[13173], 90.00th=[15008], 95.00th=[16909], 00:19:00.911 | 99.00th=[20579], 99.50th=[20579], 99.90th=[24249], 99.95th=[30540], 00:19:00.911 | 99.99th=[30802] 00:19:00.911 write: IOPS=5321, BW=20.8MiB/s (21.8MB/s)(20.8MiB/1002msec); 0 zone resets 00:19:00.911 slat (usec): min=3, max=6430, avg=87.35, stdev=485.95 00:19:00.911 clat (usec): min=207, max=30644, avg=11942.77, stdev=2709.73 00:19:00.911 lat (usec): min=3703, max=30656, avg=12030.11, stdev=2729.28 00:19:00.911 clat percentiles (usec): 00:19:00.911 | 1.00th=[ 5080], 5.00th=[ 8455], 10.00th=[10028], 20.00th=[10552], 00:19:00.911 | 30.00th=[10814], 40.00th=[11207], 50.00th=[11469], 60.00th=[11863], 00:19:00.911 | 70.00th=[12256], 80.00th=[12649], 90.00th=[15008], 95.00th=[17957], 00:19:00.911 | 99.00th=[20841], 99.50th=[21627], 
99.90th=[26346], 99.95th=[26346], 00:19:00.911 | 99.99th=[30540] 00:19:00.911 bw ( KiB/s): min=20480, max=21152, per=29.63%, avg=20816.00, stdev=475.18, samples=2 00:19:00.911 iops : min= 5120, max= 5288, avg=5204.00, stdev=118.79, samples=2 00:19:00.911 lat (usec) : 250=0.01% 00:19:00.911 lat (msec) : 4=0.25%, 10=7.11%, 20=90.55%, 50=2.09% 00:19:00.911 cpu : usr=4.70%, sys=11.09%, ctx=413, majf=0, minf=1 00:19:00.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:19:00.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.911 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:00.911 issued rwts: total=5120,5332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.911 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:00.911 job2: (groupid=0, jobs=1): err= 0: pid=908405: Thu Jul 11 21:25:35 2024 00:19:00.911 read: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec) 00:19:00.911 slat (usec): min=3, max=7492, avg=109.46, stdev=591.33 00:19:00.911 clat (usec): min=6068, max=31790, avg=14637.25, stdev=2903.23 00:19:00.911 lat (usec): min=6073, max=31798, avg=14746.71, stdev=2925.49 00:19:00.911 clat percentiles (usec): 00:19:00.911 | 1.00th=[ 8848], 5.00th=[10421], 10.00th=[11600], 20.00th=[12387], 00:19:00.911 | 30.00th=[12911], 40.00th=[13435], 50.00th=[14222], 60.00th=[15008], 00:19:00.911 | 70.00th=[16319], 80.00th=[16909], 90.00th=[18220], 95.00th=[19268], 00:19:00.911 | 99.00th=[22152], 99.50th=[22414], 99.90th=[31851], 99.95th=[31851], 00:19:00.911 | 99.99th=[31851] 00:19:00.911 write: IOPS=4118, BW=16.1MiB/s (16.9MB/s)(16.1MiB/1001msec); 0 zone resets 00:19:00.911 slat (usec): min=4, max=23545, avg=123.41, stdev=846.42 00:19:00.911 clat (usec): min=892, max=59243, avg=16190.46, stdev=7417.22 00:19:00.911 lat (usec): min=900, max=59267, avg=16313.88, stdev=7491.94 00:19:00.911 clat percentiles (usec): 00:19:00.911 | 1.00th=[ 9241], 5.00th=[10159], 10.00th=[11731], 20.00th=[12125], 00:19:00.911 | 30.00th=[12518], 40.00th=[12911], 50.00th=[13698], 60.00th=[14877], 00:19:00.911 | 70.00th=[15664], 80.00th=[16909], 90.00th=[26084], 95.00th=[35914], 00:19:00.911 | 99.00th=[45876], 99.50th=[45876], 99.90th=[47973], 99.95th=[49546], 00:19:00.911 | 99.99th=[58983] 00:19:00.911 bw ( KiB/s): min=18752, max=18752, per=26.69%, avg=18752.00, stdev= 0.00, samples=1 00:19:00.911 iops : min= 4688, max= 4688, avg=4688.00, stdev= 0.00, samples=1 00:19:00.911 lat (usec) : 1000=0.06% 00:19:00.911 lat (msec) : 4=0.27%, 10=3.41%, 20=89.03%, 50=7.21%, 100=0.02% 00:19:00.911 cpu : usr=5.80%, sys=8.10%, ctx=324, majf=0, minf=1 00:19:00.911 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:19:00.911 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:00.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:00.912 issued rwts: total=4096,4123,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:00.912 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:00.912 job3: (groupid=0, jobs=1): err= 0: pid=908406: Thu Jul 11 21:25:35 2024 00:19:00.912 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec) 00:19:00.912 slat (usec): min=2, max=26262, avg=144.88, stdev=1144.65 00:19:00.912 clat (usec): min=5324, max=80132, avg=18797.28, stdev=12847.16 00:19:00.912 lat (usec): min=5342, max=80138, avg=18942.16, stdev=12927.80 00:19:00.912 clat percentiles (usec): 00:19:00.912 | 1.00th=[ 6128], 5.00th=[ 9634], 10.00th=[11469], 20.00th=[12649], 
00:19:00.912 | 30.00th=[12911], 40.00th=[13435], 50.00th=[14353], 60.00th=[15270],
00:19:00.912 | 70.00th=[16581], 80.00th=[20055], 90.00th=[39060], 95.00th=[46400],
00:19:00.912 | 99.00th=[78119], 99.50th=[80217], 99.90th=[80217], 99.95th=[80217],
00:19:00.912 | 99.99th=[80217]
00:19:00.912 write: IOPS=3974, BW=15.5MiB/s (16.3MB/s)(15.7MiB/1010msec); 0 zone resets
00:19:00.912 slat (usec): min=3, max=11866, avg=110.56, stdev=616.21
00:19:00.912 clat (usec): min=511, max=53527, avg=15094.73, stdev=8062.10
00:19:00.912 lat (usec): min=1214, max=53537, avg=15205.28, stdev=8135.26
00:19:00.912 clat percentiles (usec):
00:19:00.912 | 1.00th=[ 5407], 5.00th=[ 7898], 10.00th=[ 9110], 20.00th=[11469],
00:19:00.912 | 30.00th=[12518], 40.00th=[12780], 50.00th=[13042], 60.00th=[13304],
00:19:00.912 | 70.00th=[13566], 80.00th=[15795], 90.00th=[21890], 95.00th=[28705],
00:19:00.912 | 99.00th=[50594], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740],
00:19:00.912 | 99.99th=[53740]
00:19:00.912 bw ( KiB/s): min=12288, max=18808, per=22.13%, avg=15548.00, stdev=4610.34, samples=2
00:19:00.912 iops : min= 3072, max= 4702, avg=3887.00, stdev=1152.58, samples=2
00:19:00.912 lat (usec) : 750=0.01%
00:19:00.912 lat (msec) : 10=9.90%, 20=72.32%, 50=15.20%, 100=2.57%
00:19:00.912 cpu : usr=4.86%, sys=5.65%, ctx=412, majf=0, minf=1
00:19:00.912 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:19:00.912 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:00.912 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:00.912 issued rwts: total=3584,4014,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:00.912 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:00.912
00:19:00.912 Run status group 0 (all jobs):
00:19:00.912 READ: bw=65.3MiB/s (68.5MB/s), 13.9MiB/s-20.0MiB/s (14.5MB/s-20.9MB/s), io=66.0MiB (69.2MB), run=1001-1010msec
00:19:00.912 WRITE: bw=68.6MiB/s (71.9MB/s), 15.5MiB/s-20.8MiB/s (16.3MB/s-21.8MB/s), io=69.3MiB (72.7MB), run=1001-1010msec
00:19:00.912
00:19:00.912 Disk stats (read/write):
00:19:00.912 nvme0n1: ios=3146/3584, merge=0/0, ticks=27221/31095, in_queue=58316, util=87.58%
00:19:00.912 nvme0n2: ios=4282/4608, merge=0/0, ticks=20575/20565, in_queue=41140, util=89.43%
00:19:00.912 nvme0n3: ios=3516/3584, merge=0/0, ticks=16775/19223, in_queue=35998, util=95.19%
00:19:00.912 nvme0n4: ios=2972/3072, merge=0/0, ticks=38849/32641, in_queue=71490, util=95.36%
00:19:00.912 21:25:35 nvmf_tcp.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:19:00.912 [global]
00:19:00.912 thread=1
00:19:00.912 invalidate=1
00:19:00.912 rw=randwrite
00:19:00.912 time_based=1
00:19:00.912 runtime=1
00:19:00.912 ioengine=libaio
00:19:00.912 direct=1
00:19:00.912 bs=4096
00:19:00.912 iodepth=128
00:19:00.912 norandommap=0
00:19:00.912 numjobs=1
00:19:00.912
00:19:00.912 verify_dump=1
00:19:00.912 verify_backlog=512
00:19:00.912 verify_state_save=0
00:19:00.912 do_verify=1
00:19:00.912 verify=crc32c-intel
00:19:00.912 [job0]
00:19:00.912 filename=/dev/nvme0n1
00:19:00.912 [job1]
00:19:00.912 filename=/dev/nvme0n2
00:19:00.912 [job2]
00:19:00.912 filename=/dev/nvme0n3
00:19:00.912 [job3]
00:19:00.912 filename=/dev/nvme0n4
00:19:00.912 Could not set queue depth (nvme0n1)
00:19:00.912 Could not set queue depth (nvme0n2)
00:19:00.912 Could not set queue depth (nvme0n3)
00:19:00.912 Could not set queue depth (nvme0n4)
00:19:00.912 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:19:00.912 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:19:00.912 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:19:00.912 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:19:00.912 fio-3.35
00:19:00.912 Starting 4 threads
00:19:02.289
00:19:02.289 job0: (groupid=0, jobs=1): err= 0: pid=908632: Thu Jul 11 21:25:36 2024
00:19:02.289 read: IOPS=3810, BW=14.9MiB/s (15.6MB/s)(15.0MiB/1007msec)
00:19:02.289 slat (usec): min=2, max=11748, avg=122.68, stdev=871.49
00:19:02.289 clat (usec): min=4301, max=35737, avg=15539.90, stdev=4557.80
00:19:02.289 lat (usec): min=5131, max=35742, avg=15662.58, stdev=4626.01
00:19:02.289 clat percentiles (usec):
00:19:02.289 | 1.00th=[ 8848], 5.00th=[10028], 10.00th=[11207], 20.00th=[12387],
00:19:02.289 | 30.00th=[13173], 40.00th=[13435], 50.00th=[13698], 60.00th=[14877],
00:19:02.289 | 70.00th=[16712], 80.00th=[19530], 90.00th=[22414], 95.00th=[24249],
00:19:02.289 | 99.00th=[28705], 99.50th=[29754], 99.90th=[35914], 99.95th=[35914],
00:19:02.289 | 99.99th=[35914]
00:19:02.289 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets
00:19:02.289 slat (usec): min=3, max=26974, avg=115.71, stdev=954.05
00:19:02.289 clat (usec): min=1489, max=63426, avg=16570.54, stdev=8600.34
00:19:02.289 lat (usec): min=1564, max=63443, avg=16686.25, stdev=8692.08
00:19:02.289 clat percentiles (usec):
00:19:02.289 | 1.00th=[ 5407], 5.00th=[ 8356], 10.00th=[ 9896], 20.00th=[10683],
00:19:02.289 | 30.00th=[11207], 40.00th=[11863], 50.00th=[13173], 60.00th=[14484],
00:19:02.289 | 70.00th=[18220], 80.00th=[24249], 90.00th=[28967], 95.00th=[38536],
00:19:02.289 | 99.00th=[43254], 99.50th=[44303], 99.90th=[47449], 99.95th=[49546],
00:19:02.289 | 99.99th=[63177]
00:19:02.289 bw ( KiB/s): min=12720, max=20048, per=23.94%, avg=16384.00, stdev=5181.68, samples=2
00:19:02.289 iops : min= 3180, max= 5012, avg=4096.00, stdev=1295.42, samples=2
00:19:02.289 lat (msec) : 2=0.04%, 4=0.03%, 10=8.55%, 20=69.87%, 50=21.49%
00:19:02.289 lat (msec) : 100=0.03%
00:19:02.289 cpu : usr=2.88%, sys=4.87%, ctx=250, majf=0, minf=1
00:19:02.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:19:02.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:02.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:02.289 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:02.289 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:02.289 job1: (groupid=0, jobs=1): err= 0: pid=908633: Thu Jul 11 21:25:36 2024
00:19:02.289 read: IOPS=5109, BW=20.0MiB/s (20.9MB/s)(20.0MiB/1002msec)
00:19:02.289 slat (usec): min=2, max=17049, avg=93.91, stdev=546.13
00:19:02.289 clat (usec): min=7350, max=49618, avg=12456.55, stdev=6445.99
00:19:02.289 lat (usec): min=7665, max=49668, avg=12550.45, stdev=6480.58
00:19:02.289 clat percentiles (usec):
00:19:02.289 | 1.00th=[ 8160], 5.00th=[ 8848], 10.00th=[ 9372], 20.00th=[10028],
00:19:02.289 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[10945],
00:19:02.289 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12780], 95.00th=[32900],
00:19:02.289 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43779], 99.95th=[43779],
00:19:02.289 | 99.99th=[49546]
00:19:02.289 write: IOPS=5614, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets
00:19:02.289 slat (usec): min=3, max=7460, avg=83.51, stdev=339.96
00:19:02.289 clat (usec): min=385, max=34289, avg=11165.83, stdev=2882.38
00:19:02.289 lat (usec): min=2799, max=34310, avg=11249.34, stdev=2905.14
00:19:02.289 clat percentiles (usec):
00:19:02.289 | 1.00th=[ 6849], 5.00th=[ 8586], 10.00th=[10028], 20.00th=[10421],
00:19:02.289 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[10945],
00:19:02.289 | 70.00th=[11076], 80.00th=[11207], 90.00th=[12387], 95.00th=[13829],
00:19:02.289 | 99.00th=[29754], 99.50th=[31851], 99.90th=[33817], 99.95th=[34341],
00:19:02.289 | 99.99th=[34341]
00:19:02.289 bw ( KiB/s): min=24576, max=24576, per=35.91%, avg=24576.00, stdev= 0.00, samples=1
00:19:02.289 iops : min= 6144, max= 6144, avg=6144.00, stdev= 0.00, samples=1
00:19:02.289 lat (usec) : 500=0.01%
00:19:02.289 lat (msec) : 4=0.43%, 10=13.30%, 20=81.84%, 50=4.42%
00:19:02.289 cpu : usr=6.59%, sys=9.59%, ctx=711, majf=0, minf=1
00:19:02.289 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:19:02.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:02.289 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:02.289 issued rwts: total=5120,5626,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:02.289 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:02.289 job2: (groupid=0, jobs=1): err= 0: pid=908634: Thu Jul 11 21:25:36 2024
00:19:02.289 read: IOPS=3548, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1010msec)
00:19:02.289 slat (usec): min=2, max=14123, avg=113.13, stdev=771.71
00:19:02.289 clat (usec): min=4524, max=41365, avg=15059.29, stdev=6080.16
00:19:02.289 lat (usec): min=4537, max=44554, avg=15172.42, stdev=6141.75
00:19:02.289 clat percentiles (usec):
00:19:02.289 | 1.00th=[ 6587], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11600],
00:19:02.289 | 30.00th=[11731], 40.00th=[12125], 50.00th=[13173], 60.00th=[13698],
00:19:02.289 | 70.00th=[14484], 80.00th=[16909], 90.00th=[24511], 95.00th=[30278],
00:19:02.289 | 99.00th=[35914], 99.50th=[36963], 99.90th=[40109], 99.95th=[40109],
00:19:02.289 | 99.99th=[41157]
00:19:02.289 write: IOPS=3935, BW=15.4MiB/s (16.1MB/s)(15.5MiB/1010msec); 0 zone resets
00:19:02.290 slat (usec): min=3, max=13420, avg=139.25, stdev=867.50
00:19:02.290 clat (usec): min=2627, max=94461, avg=18631.39, stdev=16037.80
00:19:02.290 lat (usec): min=2633, max=94474, avg=18770.64, stdev=16141.34
00:19:02.290 clat percentiles (usec):
00:19:02.290 | 1.00th=[ 4752], 5.00th=[ 8586], 10.00th=[10290], 20.00th=[11600],
00:19:02.290 | 30.00th=[12649], 40.00th=[13304], 50.00th=[13566], 60.00th=[13829],
00:19:02.290 | 70.00th=[14746], 80.00th=[21627], 90.00th=[28443], 95.00th=[63701],
00:19:02.290 | 99.00th=[89654], 99.50th=[90702], 99.90th=[94897], 99.95th=[94897],
00:19:02.290 | 99.99th=[94897]
00:19:02.290 bw ( KiB/s): min=14344, max=16432, per=22.48%, avg=15388.00, stdev=1476.44, samples=2
00:19:02.290 iops : min= 3586, max= 4108, avg=3847.00, stdev=369.11, samples=2
00:19:02.290 lat (msec) : 4=0.37%, 10=6.57%, 20=75.04%, 50=14.76%, 100=3.25%
00:19:02.290 cpu : usr=4.06%, sys=5.55%, ctx=354, majf=0, minf=1
00:19:02.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:19:02.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:02.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:02.290 issued rwts: total=3584,3975,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:02.290 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:02.290 job3: (groupid=0, jobs=1): err= 0: pid=908635: Thu Jul 11 21:25:36 2024
00:19:02.290 read: IOPS=3182, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1005msec)
00:19:02.290 slat (usec): min=3, max=22076, avg=144.61, stdev=914.67
00:19:02.290 clat (usec): min=784, max=68952, avg=17501.06, stdev=8259.71
00:19:02.290 lat (usec): min=4008, max=68993, avg=17645.66, stdev=8354.32
00:19:02.290 clat percentiles (usec):
00:19:02.290 | 1.00th=[ 4621], 5.00th=[10683], 10.00th=[11994], 20.00th=[12256],
00:19:02.290 | 30.00th=[12518], 40.00th=[12649], 50.00th=[13173], 60.00th=[16581],
00:19:02.290 | 70.00th=[20055], 80.00th=[23987], 90.00th=[26870], 95.00th=[32900],
00:19:02.290 | 99.00th=[46924], 99.50th=[68682], 99.90th=[68682], 99.95th=[68682],
00:19:02.290 | 99.99th=[68682]
00:19:02.290 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets
00:19:02.290 slat (usec): min=4, max=16837, avg=137.08, stdev=792.16
00:19:02.290 clat (usec): min=1539, max=80149, avg=19897.84, stdev=11500.83
00:19:02.290 lat (usec): min=1551, max=80166, avg=20034.92, stdev=11551.46
00:19:02.290 clat percentiles (usec):
00:19:02.290 | 1.00th=[ 9372], 5.00th=[10683], 10.00th=[11994], 20.00th=[12387],
00:19:02.290 | 30.00th=[12649], 40.00th=[12780], 50.00th=[13304], 60.00th=[20317],
00:19:02.290 | 70.00th=[23987], 80.00th=[26084], 90.00th=[31851], 95.00th=[42206],
00:19:02.290 | 99.00th=[69731], 99.50th=[78119], 99.90th=[78119], 99.95th=[78119],
00:19:02.290 | 99.99th=[80217]
00:19:02.290 bw ( KiB/s): min=12288, max=16368, per=20.94%, avg=14328.00, stdev=2885.00, samples=2
00:19:02.290 iops : min= 3072, max= 4092, avg=3582.00, stdev=721.25, samples=2
00:19:02.290 lat (usec) : 1000=0.01%
00:19:02.290 lat (msec) : 2=0.10%, 10=3.77%, 20=61.01%, 50=33.49%, 100=1.61%
00:19:02.290 cpu : usr=3.49%, sys=8.27%, ctx=371, majf=0, minf=1
00:19:02.290 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1%
00:19:02.290 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:02.290 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:19:02.290 issued rwts: total=3198,3584,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:02.290 latency : target=0, window=0, percentile=100.00%, depth=128
00:19:02.290
00:19:02.290 Run status group 0 (all jobs):
00:19:02.290 READ: bw=60.9MiB/s (63.8MB/s), 12.4MiB/s-20.0MiB/s (13.0MB/s-20.9MB/s), io=61.5MiB (64.5MB), run=1002-1010msec
00:19:02.290 WRITE: bw=66.8MiB/s (70.1MB/s), 13.9MiB/s-21.9MiB/s (14.6MB/s-23.0MB/s), io=67.5MiB (70.8MB), run=1002-1010msec
00:19:02.290
00:19:02.290 Disk stats (read/write):
00:19:02.290 nvme0n1: ios=3122/3237, merge=0/0, ticks=27603/30783, in_queue=58386, util=86.57%
00:19:02.290 nvme0n2: ios=4362/4608, merge=0/0, ticks=17161/16214, in_queue=33375, util=93.70%
00:19:02.290 nvme0n3: ios=3610/3695, merge=0/0, ticks=33726/35854, in_queue=69580, util=97.81%
00:19:02.290 nvme0n4: ios=2584/2641, merge=0/0, ticks=20475/23668, in_queue=44143, util=96.73%
00:19:02.290 21:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@55 -- # sync
00:19:02.290 21:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=908767
00:19:02.290 21:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10
00:19:02.290 21:25:36 nvmf_tcp.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3
00:19:02.290 [global]
00:19:02.290 thread=1
00:19:02.290 invalidate=1
00:19:02.290 rw=read
00:19:02.290 time_based=1
00:19:02.290 runtime=10
00:19:02.290 ioengine=libaio
00:19:02.290 direct=1
00:19:02.290 bs=4096
00:19:02.290 iodepth=1
00:19:02.290 norandommap=1
00:19:02.290 numjobs=1
00:19:02.290
00:19:02.290 [job0]
00:19:02.290 filename=/dev/nvme0n1
00:19:02.290 [job1]
00:19:02.290 filename=/dev/nvme0n2
00:19:02.290 [job2]
00:19:02.290 filename=/dev/nvme0n3
00:19:02.290 [job3]
00:19:02.290 filename=/dev/nvme0n4
00:19:02.290 Could not set queue depth (nvme0n1)
00:19:02.290 Could not set queue depth (nvme0n2)
00:19:02.290 Could not set queue depth (nvme0n3)
00:19:02.290 Could not set queue depth (nvme0n4)
00:19:02.548 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:02.548 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:02.548 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:02.548 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:19:02.548 fio-3.35
00:19:02.548 Starting 4 threads
00:19:05.834 21:25:39 nvmf_tcp.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0
00:19:05.834 21:25:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0
00:19:05.834 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=290816, buflen=4096
00:19:05.834 fio: pid=908864, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:19:05.834 21:25:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:19:05.834 21:25:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0
00:19:05.834 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=1327104, buflen=4096
00:19:05.834 fio: pid=908863, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:19:06.092 21:25:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:19:06.092 21:25:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1
00:19:06.092 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=35082240, buflen=4096
00:19:06.092 fio: pid=908861, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:19:06.351 21:25:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:19:06.351 21:25:40 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2
00:19:06.351 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=1130496, buflen=4096
00:19:06.351 fio: pid=908862, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error
00:19:06.351
00:19:06.351 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=908861: Thu Jul 11 21:25:40 2024
00:19:06.351 read: IOPS=2512, BW=9.81MiB/s (10.3MB/s)(33.5MiB/3409msec)
00:19:06.351 slat (usec): min=4, max=15721, avg=14.16, stdev=251.86
00:19:06.351 clat (usec): min=197, max=41087, avg=378.66, stdev=2296.41
00:19:06.351 lat (usec): min=203, max=56808, avg=392.82, stdev=2366.63
00:19:06.351 clat percentiles (usec):
00:19:06.351 | 1.00th=[ 208], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 225],
00:19:06.351 | 30.00th=[ 231], 40.00th=[ 237], 50.00th=[ 245], 60.00th=[ 251],
00:19:06.351 | 70.00th=[ 255], 80.00th=[ 265], 90.00th=[ 281], 95.00th=[ 289],
00:19:06.351 | 99.00th=[ 441], 99.50th=[ 537], 99.90th=[41157], 99.95th=[41157],
00:19:06.351 | 99.99th=[41157]
00:19:06.351 bw ( KiB/s): min= 96, max=16768, per=100.00%, avg=10982.67, stdev=7292.04, samples=6
00:19:06.351 iops : min= 24, max= 4192, avg=2745.67, stdev=1823.01, samples=6
00:19:06.351 lat (usec) : 250=60.11%, 500=39.19%, 750=0.35%, 1000=0.01%
00:19:06.351 lat (msec) : 50=0.33%
00:19:06.351 cpu : usr=1.85%, sys=3.61%, ctx=8570, majf=0, minf=1
00:19:06.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:06.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.351 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.351 issued rwts: total=8566,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.351 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:06.351 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=908862: Thu Jul 11 21:25:40 2024
00:19:06.351 read: IOPS=75, BW=300KiB/s (307kB/s)(1104KiB/3684msec)
00:19:06.351 slat (usec): min=5, max=21946, avg=122.26, stdev=1419.72
00:19:06.351 clat (usec): min=242, max=44958, avg=13139.68, stdev=18954.33
00:19:06.351 lat (usec): min=250, max=62988, avg=13262.25, stdev=19176.45
00:19:06.351 clat percentiles (usec):
00:19:06.351 | 1.00th=[ 249], 5.00th=[ 255], 10.00th=[ 265], 20.00th=[ 281],
00:19:06.351 | 30.00th=[ 289], 40.00th=[ 293], 50.00th=[ 297], 60.00th=[ 400],
00:19:06.351 | 70.00th=[40633], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:19:06.351 | 99.00th=[42206], 99.50th=[42206], 99.90th=[44827], 99.95th=[44827],
00:19:06.351 | 99.99th=[44827]
00:19:06.351 bw ( KiB/s): min= 95, max= 1552, per=3.09%, avg=310.71, stdev=547.44, samples=7
00:19:06.351 iops : min= 23, max= 388, avg=77.57, stdev=136.91, samples=7
00:19:06.351 lat (usec) : 250=1.44%, 500=66.06%, 750=0.72%
00:19:06.351 lat (msec) : 50=31.41%
00:19:06.351 cpu : usr=0.14%, sys=0.03%, ctx=281, majf=0, minf=1
00:19:06.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:06.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.351 complete : 0=0.4%, 4=99.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.351 issued rwts: total=277,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.351 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:06.351 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=908863: Thu Jul 11 21:25:40 2024
00:19:06.351 read: IOPS=102, BW=410KiB/s (420kB/s)(1296KiB/3162msec)
00:19:06.351 slat (usec): min=4, max=8851, avg=38.82, stdev=490.38
00:19:06.351 clat (usec): min=233, max=44984, avg=9649.80, stdev=17097.52
00:19:06.351 lat (usec): min=238, max=49971, avg=9688.70, stdev=17158.97
00:19:06.351 clat percentiles (usec):
00:19:06.351 | 1.00th=[ 235], 5.00th=[ 249], 10.00th=[ 262], 20.00th=[ 297],
00:19:06.351 | 30.00th=[ 314], 40.00th=[ 359], 50.00th=[ 388], 60.00th=[ 441],
00:19:06.351 | 70.00th=[ 498], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:19:06.351 | 99.00th=[41157], 99.50th=[41681], 99.90th=[44827], 99.95th=[44827],
00:19:06.351 | 99.99th=[44827]
00:19:06.351 bw ( KiB/s): min= 88, max= 104, per=0.97%, avg=97.33, stdev= 6.02, samples=6
00:19:06.351 iops : min= 22, max= 26, avg=24.33, stdev= 1.51, samples=6
00:19:06.351 lat (usec) : 250=5.23%, 500=65.23%, 750=6.46%
00:19:06.351 lat (msec) : 50=22.77%
00:19:06.351 cpu : usr=0.06%, sys=0.13%, ctx=326, majf=0, minf=1
00:19:06.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:06.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.351 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.351 issued rwts: total=325,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.351 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:06.351 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=908864: Thu Jul 11 21:25:40 2024
00:19:06.351 read: IOPS=24, BW=98.1KiB/s (100kB/s)(284KiB/2896msec)
00:19:06.351 slat (nsec): min=10433, max=37071, avg=19686.64, stdev=7930.27
00:19:06.351 clat (usec): min=548, max=41340, avg=40411.07, stdev=4798.75
00:19:06.351 lat (usec): min=577, max=41351, avg=40430.74, stdev=4797.61
00:19:06.351 clat percentiles (usec):
00:19:06.351 | 1.00th=[ 545], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:19:06.351 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:19:06.351 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:19:06.351 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:19:06.351 | 99.99th=[41157]
00:19:06.351 bw ( KiB/s): min= 96, max= 104, per=0.99%, avg=99.20, stdev= 4.38, samples=5
00:19:06.351 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5
00:19:06.351 lat (usec) : 750=1.39%
00:19:06.351 lat (msec) : 50=97.22%
00:19:06.351 cpu : usr=0.10%, sys=0.00%, ctx=74, majf=0, minf=1
00:19:06.351 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:19:06.351 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.351 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:19:06.351 issued rwts: total=72,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:19:06.351 latency : target=0, window=0, percentile=100.00%, depth=1
00:19:06.351
00:19:06.351 Run status group 0 (all jobs):
00:19:06.351 READ: bw=9.79MiB/s (10.3MB/s), 98.1KiB/s-9.81MiB/s (100kB/s-10.3MB/s), io=36.1MiB (37.8MB), run=2896-3684msec
00:19:06.351
00:19:06.351 Disk stats (read/write):
00:19:06.351 nvme0n1: ios=8563/0, merge=0/0, ticks=3048/0, in_queue=3048, util=94.54%
00:19:06.351 nvme0n2: ios=274/0, merge=0/0, ticks=3542/0, in_queue=3542, util=95.55%
00:19:06.351 nvme0n3: ios=300/0, merge=0/0, ticks=3077/0, in_queue=3077, util=96.47%
00:19:06.351 nvme0n4: ios=119/0, merge=0/0, ticks=3433/0, in_queue=3433, util=99.18%
00:19:06.609 21:25:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:19:06.609 21:25:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3
00:19:06.867 21:25:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:19:06.867 21:25:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4
00:19:07.124 21:25:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:19:07.124 21:25:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5
00:19:07.382 21:25:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs
00:19:07.382 21:25:41 nvmf_tcp.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6
00:19:07.640 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0
00:19:07.640 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # wait 908767
00:19:07.640 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4
00:19:07.640 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:19:07.640 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:19:07.640 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']'
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected'
00:19:07.641 nvmf hotplug test: fio failed as expected
00:19:07.641 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@117 -- # sync
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:07.899 rmmod nvme_tcp
00:19:07.899 rmmod nvme_fabrics
00:19:07.899 rmmod nvme_keyring
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 906870 ']'
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 906870
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 906870 ']'
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 906870
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:19:07.899 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 906870
00:19:08.156 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:19:08.156 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:19:08.156 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 906870'
00:19:08.156 killing process with pid 906870
00:19:08.156 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 906870
00:19:08.156 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 906870
00:19:08.416 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:08.416 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:08.416 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:08.416 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:08.416 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:08.416 21:25:42 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:08.416 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:08.416 21:25:42 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:10.323 21:25:44 nvmf_tcp.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:10.323
00:19:10.323 real 0m23.233s
00:19:10.323 user 1m21.246s
00:19:10.323 sys 0m6.283s
00:19:10.323 21:25:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable
00:19:10.323 21:25:44 nvmf_tcp.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x
00:19:10.323 ************************************
00:19:10.323 END TEST nvmf_fio_target
00:19:10.323 ************************************
00:19:10.323 21:25:44 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:19:10.323 21:25:44 nvmf_tcp -- nvmf/nvmf.sh@56 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:19:10.323 21:25:44 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:19:10.323 21:25:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:19:10.323 21:25:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:19:10.323 ************************************
00:19:10.323 START TEST nvmf_bdevio
00:19:10.323 ************************************
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp
00:19:10.323 * Looking for test storage...
00:19:10.323 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@5 -- # export PATH
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0
00:19:10.323 21:25:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable
00:19:10.582 21:25:45 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=()
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=()
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=()
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=()
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=()
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=()
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=()
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:19:12.488 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:19:12.488 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:19:12.488 Found net devices under 0000:0a:00.0: cvl_0_0
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:19:12.488 Found net devices under 0000:0a:00.1: cvl_0_1
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:12.488 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:12.488 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms
00:19:12.488
00:19:12.488 --- 10.0.0.2 ping statistics ---
00:19:12.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:12.488 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms
00:19:12.488 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:12.488 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:12.488 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.077 ms
00:19:12.488
00:19:12.488 --- 10.0.0.1 ping statistics ---
00:19:12.488 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:12.488 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable
00:19:12.489 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=911485
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 911485
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 911485 ']'
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:19:12.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable
00:19:12.747 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:12.747 [2024-07-11 21:25:47.305985] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:19:12.747 [2024-07-11 21:25:47.306059] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:19:12.747 EAL: No free 2048 kB hugepages reported on node 1
00:19:12.747 [2024-07-11 21:25:47.377543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:19:12.747 [2024-07-11 21:25:47.473493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:19:12.747 [2024-07-11 21:25:47.473557] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:19:12.747 [2024-07-11 21:25:47.473574] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:19:12.747 [2024-07-11 21:25:47.473587] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:19:12.747 [2024-07-11 21:25:47.473599] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:19:12.747 [2024-07-11 21:25:47.473686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:19:12.747 [2024-07-11 21:25:47.474779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:19:12.747 [2024-07-11 21:25:47.474853] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:19:12.747 [2024-07-11 21:25:47.474858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:13.007 [2024-07-11 21:25:47.626592] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:13.007 Malloc0
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:13.007 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x
00:19:13.008 [2024-07-11 21:25:47.680111] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # config=()
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:19:13.008 {
00:19:13.008 "params": {
00:19:13.008 "name": "Nvme$subsystem",
00:19:13.008 "trtype": "$TEST_TRANSPORT",
00:19:13.008 "traddr": "$NVMF_FIRST_TARGET_IP",
00:19:13.008 "adrfam": "ipv4",
00:19:13.008 "trsvcid": "$NVMF_PORT",
00:19:13.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:19:13.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:19:13.008 "hdgst": ${hdgst:-false},
00:19:13.008 "ddgst": ${ddgst:-false}
00:19:13.008 },
00:19:13.008 "method": "bdev_nvme_attach_controller"
00:19:13.008 }
00:19:13.008 EOF
00:19:13.008 )")
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@554 -- # cat
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@556 -- # jq .
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=,
00:19:13.008 21:25:47 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:19:13.008 "params": {
00:19:13.008 "name": "Nvme1",
00:19:13.008 "trtype": "tcp",
00:19:13.008 "traddr": "10.0.0.2",
00:19:13.008 "adrfam": "ipv4",
00:19:13.008 "trsvcid": "4420",
00:19:13.008 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:13.008 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:13.008 "hdgst": false,
00:19:13.008 "ddgst": false
00:19:13.008 },
00:19:13.008 "method": "bdev_nvme_attach_controller"
00:19:13.008 }'
00:19:13.008 [2024-07-11 21:25:47.726803] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:19:13.008 [2024-07-11 21:25:47.726871] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid911625 ]
00:19:13.267 EAL: No free 2048 kB hugepages reported on node 1
00:19:13.267 [2024-07-11 21:25:47.787867] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:19:13.267 [2024-07-11 21:25:47.880305] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:19:13.267 [2024-07-11 21:25:47.880356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:19:13.267 [2024-07-11 21:25:47.880359] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:13.526 I/O targets:
00:19:13.526 Nvme1n1: 131072 blocks of 512 bytes (64 MiB)
00:19:13.526
00:19:13.526
00:19:13.526 CUnit - A unit testing framework for C - Version 2.1-3
00:19:13.526 http://cunit.sourceforge.net/
00:19:13.526
00:19:13.526
00:19:13.526 Suite: bdevio tests on: Nvme1n1
00:19:13.526 Test: blockdev write read block ...passed
00:19:13.526 Test: blockdev write zeroes read block ...passed
00:19:13.526 Test: blockdev write zeroes read no split ...passed
00:19:13.784 Test: blockdev write zeroes read split ...passed
00:19:13.784 Test: blockdev write zeroes read split partial ...passed
00:19:13.784 Test: blockdev reset ...[2024-07-11 21:25:48.374610] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:19:13.784 [2024-07-11 21:25:48.374715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x54ec90 (9): Bad file descriptor
00:19:13.784 [2024-07-11 21:25:48.392219] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:19:13.784 passed 00:19:13.784 Test: blockdev write read 8 blocks ...passed 00:19:13.784 Test: blockdev write read size > 128k ...passed 00:19:13.784 Test: blockdev write read invalid size ...passed 00:19:13.784 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:19:13.784 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:19:13.784 Test: blockdev write read max offset ...passed 00:19:14.070 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:19:14.070 Test: blockdev writev readv 8 blocks ...passed 00:19:14.070 Test: blockdev writev readv 30 x 1block ...passed 00:19:14.070 Test: blockdev writev readv block ...passed 00:19:14.070 Test: blockdev writev readv size > 128k ...passed 00:19:14.070 Test: blockdev writev readv size > 128k in two iovs ...passed 00:19:14.070 Test: blockdev comparev and writev ...[2024-07-11 21:25:48.648601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.070 [2024-07-11 21:25:48.648638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:19:14.070 [2024-07-11 21:25:48.648663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.070 [2024-07-11 21:25:48.648681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:19:14.070 [2024-07-11 21:25:48.649059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.070 [2024-07-11 21:25:48.649085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:19:14.070 [2024-07-11 21:25:48.649107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.070 [2024-07-11 21:25:48.649123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:19:14.070 [2024-07-11 21:25:48.649524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.070 [2024-07-11 21:25:48.649548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:19:14.071 [2024-07-11 21:25:48.649570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.071 [2024-07-11 21:25:48.649592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:19:14.071 [2024-07-11 21:25:48.649957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.071 [2024-07-11 21:25:48.649982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:19:14.071 [2024-07-11 21:25:48.650003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:19:14.071 [2024-07-11 21:25:48.650019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:19:14.071 passed 00:19:14.071 Test: blockdev nvme passthru rw ...passed 00:19:14.071 Test: blockdev nvme passthru vendor specific ...[2024-07-11 21:25:48.733111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.071 [2024-07-11 21:25:48.733138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:19:14.071 [2024-07-11 21:25:48.733286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.071 [2024-07-11 21:25:48.733308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:19:14.071 [2024-07-11 21:25:48.733460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.071 [2024-07-11 21:25:48.733483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:19:14.071 [2024-07-11 21:25:48.733634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:19:14.071 [2024-07-11 21:25:48.733657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:19:14.071 passed 00:19:14.071 Test: blockdev nvme admin passthru ...passed 00:19:14.071 Test: blockdev copy ...passed 00:19:14.071 00:19:14.071 Run Summary: Type Total Ran Passed Failed Inactive 00:19:14.071 suites 1 1 n/a 0 0 00:19:14.071 tests 23 23 23 0 0 00:19:14.071 asserts 152 152 152 0 n/a 00:19:14.071 00:19:14.071 Elapsed time = 1.230 seconds 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:14.329 21:25:48 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:14.329 rmmod nvme_tcp 00:19:14.329 rmmod nvme_fabrics 00:19:14.329 rmmod nvme_keyring 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 911485 ']' 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 911485 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 
911485 ']' 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 911485 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 911485 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 911485' 00:19:14.329 killing process with pid 911485 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 911485 00:19:14.329 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 911485 00:19:14.588 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:14.588 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:14.588 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:14.588 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:14.588 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:14.588 21:25:49 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:14.588 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:14.588 21:25:49 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.129 21:25:51 nvmf_tcp.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:17.129 00:19:17.129 real 0m6.335s 00:19:17.129 user 0m10.445s 00:19:17.129 sys 0m2.053s 00:19:17.129 21:25:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:17.129 21:25:51 nvmf_tcp.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:19:17.129 ************************************ 00:19:17.129 END TEST nvmf_bdevio 00:19:17.129 ************************************ 00:19:17.129 21:25:51 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:19:17.129 21:25:51 nvmf_tcp -- nvmf/nvmf.sh@57 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:17.129 21:25:51 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:19:17.129 21:25:51 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.129 21:25:51 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:19:17.129 ************************************ 00:19:17.129 START TEST nvmf_auth_target 00:19:17.129 ************************************ 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:19:17.129 * Looking for test storage... 
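With nvmf_bdevio finished, nvmftestfini tears the environment back down before the next test starts. A hedged sketch of the equivalent manual cleanup, assuming root, the cvl_0_* names from this run, and a hypothetical $nvmfpid variable holding the target PID; the _remove_spdk_ns helper's internals are not shown in this log, so the netns deletion line is an assumption:

  kill -9 "$nvmfpid" 2>/dev/null || true        # stop the nvmf_tgt app ($nvmfpid is illustrative)
  modprobe -v -r nvme-tcp                       # unloads nvme_tcp and, per the rmmod lines above,
  modprobe -v -r nvme-fabrics                   # its nvme_fabrics / nvme_keyring dependencies
  ip netns delete cvl_0_0_ns_spdk 2>/dev/null || true   # assumption: what _remove_spdk_ns amounts to
  ip -4 addr flush cvl_0_1                      # drop the initiator-side address, as logged above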
00:19:17.129 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@59 -- # 
nvmftestinit 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:19:17.129 21:25:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:19.029 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:19.030 21:25:53 
nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:19.030 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:19.030 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: 
cvl_0_0' 00:19:19.030 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:19.030 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target 
-- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:19.030 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:19.030 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:19:19.030 00:19:19.030 --- 10.0.0.2 ping statistics --- 00:19:19.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.030 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:19.030 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:19.030 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:19:19.030 00:19:19.030 --- 10.0.0.1 ping statistics --- 00:19:19.030 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:19.030 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=913696 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 913696 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 913696 ']' 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
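The interface setup in this block moves one port of the two-port NIC (cvl_0_0) into a private namespace for the target, leaves its sibling (cvl_0_1) in the root namespace as the initiator, and checks reachability in both directions with ping. On a box without a spare NIC pair, the same 10.0.0.0/24 topology can be approximated with a veth pair; a sketch under that assumption (the spdk_tgt_ns and veth_* names are illustrative):

  ip netns add spdk_tgt_ns                         # target-side namespace
  ip link add veth_init type veth peer name veth_tgt
  ip link set veth_tgt netns spdk_tgt_ns           # target end moves into the namespace
  ip addr add 10.0.0.1/24 dev veth_init            # initiator address, matching this run
  ip netns exec spdk_tgt_ns ip addr add 10.0.0.2/24 dev veth_tgt
  ip link set veth_init up
  ip netns exec spdk_tgt_ns ip link set veth_tgt up
  ip netns exec spdk_tgt_ns ip link set lo up
  ping -c 1 10.0.0.2                               # initiator -> target
  ip netns exec spdk_tgt_ns ping -c 1 10.0.0.1     # target -> initiator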
00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.030 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=913720 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=37f0ca5f16e1c611c40b5e0170cd8708f4da6bf151c4768d 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.77p 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 37f0ca5f16e1c611c40b5e0170cd8708f4da6bf151c4768d 0 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 37f0ca5f16e1c611c40b5e0170cd8708f4da6bf151c4768d 0 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=37f0ca5f16e1c611c40b5e0170cd8708f4da6bf151c4768d 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.77p 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.77p 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.77p 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file 
key 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0fb047f5679d7dc1320622155d954be7507cb6e9494ffadfc285b7f880ab2f20 00:19:19.289 21:25:53 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.xmA 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0fb047f5679d7dc1320622155d954be7507cb6e9494ffadfc285b7f880ab2f20 3 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0fb047f5679d7dc1320622155d954be7507cb6e9494ffadfc285b7f880ab2f20 3 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0fb047f5679d7dc1320622155d954be7507cb6e9494ffadfc285b7f880ab2f20 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.xmA 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.xmA 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.xmA 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=cf32a5d8f2e5e6246f801019ef701fe0 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.jGx 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key cf32a5d8f2e5e6246f801019ef701fe0 1 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 cf32a5d8f2e5e6246f801019ef701fe0 1 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- 
nvmf/common.sh@704 -- # key=cf32a5d8f2e5e6246f801019ef701fe0 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:19.289 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.jGx 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.jGx 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.jGx 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=674ed55e52acf6b381c6444f9c7d852933d6a3d6a295bb25 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.aQ9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 674ed55e52acf6b381c6444f9c7d852933d6a3d6a295bb25 2 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 674ed55e52acf6b381c6444f9c7d852933d6a3d6a295bb25 2 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=674ed55e52acf6b381c6444f9c7d852933d6a3d6a295bb25 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.aQ9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.aQ9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.aQ9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8dba3f9541e0539f0aec27c9e57979936b4b62cb647c52b9 00:19:19.548 
21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.uBP 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8dba3f9541e0539f0aec27c9e57979936b4b62cb647c52b9 2 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8dba3f9541e0539f0aec27c9e57979936b4b62cb647c52b9 2 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8dba3f9541e0539f0aec27c9e57979936b4b62cb647c52b9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.uBP 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.uBP 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.uBP 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7e9e782086601a79071732b2de004a5b 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.iIF 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7e9e782086601a79071732b2de004a5b 1 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7e9e782086601a79071732b2de004a5b 1 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7e9e782086601a79071732b2de004a5b 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.iIF 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.iIF 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.iIF 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@723 -- # local 
digest len file key 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@727 -- # key=dc593bae66652894734f24d97c10bc5e102c056985a7fd8a23940ce344ae9dee 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.Sh9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key dc593bae66652894734f24d97c10bc5e102c056985a7fd8a23940ce344ae9dee 3 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 dc593bae66652894734f24d97c10bc5e102c056985a7fd8a23940ce344ae9dee 3 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # key=dc593bae66652894734f24d97c10bc5e102c056985a7fd8a23940ce344ae9dee 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.Sh9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.Sh9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.Sh9 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 913696 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 913696 ']' 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:19.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
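Each gen_dhchap_key call above draws random hex from /dev/urandom and wraps it in the DHHC-1 secret format that nvme connect consumes later in this test: a two-digit hash id (00 = null, 01 = sha256, 02 = sha384, 03 = sha512, matching the digests map in the log), then base64 of the secret with a little-endian CRC32 appended, then a trailing colon. Note the secret is the hex text itself: the base64 payload of keys[0] (MzdmMGNh...) decodes straight back to the 37f0ca5f... string. A sketch of the formatting step under those assumptions, for the null-digest key:

  secret=37f0ca5f16e1c611c40b5e0170cd8708f4da6bf151c4768d   # from: xxd -p -c0 -l 24 /dev/urandom
  # append CRC32 (little-endian) and base64-encode, yielding DHHC-1:00:...:
  python3 -c 'import base64,sys,zlib; s=sys.argv[1].encode(); print("DHHC-1:00:"+base64.b64encode(s+zlib.crc32(s).to_bytes(4,"little")).decode()+":")' "$secret"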
00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.548 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 913720 /var/tmp/host.sock 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 913720 ']' 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:19:19.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:19.806 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.77p 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.77p 00:19:20.063 21:25:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.77p 00:19:20.319 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.xmA ]] 00:19:20.319 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xmA 00:19:20.319 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.319 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.319 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.319 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xmA 00:19:20.319 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.xmA 00:19:20.576 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:20.576 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.jGx 00:19:20.576 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.576 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.576 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.576 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.jGx 00:19:20.576 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.jGx 00:19:20.833 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.aQ9 ]] 00:19:20.833 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aQ9 00:19:20.833 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:20.833 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:20.833 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:20.833 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aQ9 00:19:20.833 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.aQ9 00:19:21.090 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:21.090 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.uBP 00:19:21.090 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.090 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.090 21:25:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.090 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.uBP 00:19:21.090 21:25:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.uBP 00:19:21.348 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.iIF ]] 00:19:21.348 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iIF 00:19:21.348 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.348 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.348 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.348 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.iIF 00:19:21.348 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 
/tmp/spdk.key-sha256.iIF 00:19:21.604 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:19:21.604 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.Sh9 00:19:21.604 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:21.605 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:21.605 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:21.605 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.Sh9 00:19:21.605 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.Sh9 00:19:21.861 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:19:21.861 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:19:21.861 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:21.861 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:21.861 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:21.861 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.118 21:25:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:22.681 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:22.681 { 00:19:22.681 "cntlid": 1, 00:19:22.681 "qid": 0, 00:19:22.681 "state": "enabled", 00:19:22.681 "thread": "nvmf_tgt_poll_group_000", 00:19:22.681 "listen_address": { 00:19:22.681 "trtype": "TCP", 00:19:22.681 "adrfam": "IPv4", 00:19:22.681 "traddr": "10.0.0.2", 00:19:22.681 "trsvcid": "4420" 00:19:22.681 }, 00:19:22.681 "peer_address": { 00:19:22.681 "trtype": "TCP", 00:19:22.681 "adrfam": "IPv4", 00:19:22.681 "traddr": "10.0.0.1", 00:19:22.681 "trsvcid": "52942" 00:19:22.681 }, 00:19:22.681 "auth": { 00:19:22.681 "state": "completed", 00:19:22.681 "digest": "sha256", 00:19:22.681 "dhgroup": "null" 00:19:22.681 } 00:19:22.681 } 00:19:22.681 ]' 00:19:22.681 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:22.950 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:22.950 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:22.950 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:22.950 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:22.950 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:22.950 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:22.950 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:23.207 21:25:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:19:24.140 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:24.140 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:24.140 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:24.140 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.140 21:25:58 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.140 21:25:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.140 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:24.140 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.140 21:25:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.398 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:24.654 00:19:24.654 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:24.654 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:24.654 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:24.912 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:24.912 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:24.912 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:24.912 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:24.912 21:25:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:24.912 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:24.912 { 00:19:24.912 "cntlid": 3, 00:19:24.912 "qid": 0, 00:19:24.912 
"state": "enabled", 00:19:24.912 "thread": "nvmf_tgt_poll_group_000", 00:19:24.912 "listen_address": { 00:19:24.912 "trtype": "TCP", 00:19:24.912 "adrfam": "IPv4", 00:19:24.912 "traddr": "10.0.0.2", 00:19:24.912 "trsvcid": "4420" 00:19:24.912 }, 00:19:24.912 "peer_address": { 00:19:24.912 "trtype": "TCP", 00:19:24.912 "adrfam": "IPv4", 00:19:24.912 "traddr": "10.0.0.1", 00:19:24.912 "trsvcid": "52964" 00:19:24.912 }, 00:19:24.912 "auth": { 00:19:24.912 "state": "completed", 00:19:24.912 "digest": "sha256", 00:19:24.912 "dhgroup": "null" 00:19:24.912 } 00:19:24.912 } 00:19:24.912 ]' 00:19:24.912 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:25.169 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:25.169 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:25.169 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:25.169 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:25.169 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:25.169 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:25.169 21:25:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:25.427 21:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:19:26.365 21:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:26.365 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:26.365 21:26:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:26.365 21:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.365 21:26:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.365 21:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.365 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:26.365 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.365 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:26.622 21:26:01 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.622 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:26.880 00:19:26.880 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:26.880 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:26.880 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:27.137 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:27.137 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:27.137 21:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:27.137 21:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:27.396 21:26:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:27.396 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:27.396 { 00:19:27.396 "cntlid": 5, 00:19:27.396 "qid": 0, 00:19:27.396 "state": "enabled", 00:19:27.396 "thread": "nvmf_tgt_poll_group_000", 00:19:27.396 "listen_address": { 00:19:27.396 "trtype": "TCP", 00:19:27.396 "adrfam": "IPv4", 00:19:27.396 "traddr": "10.0.0.2", 00:19:27.396 "trsvcid": "4420" 00:19:27.396 }, 00:19:27.396 "peer_address": { 00:19:27.396 "trtype": "TCP", 00:19:27.396 "adrfam": "IPv4", 00:19:27.396 "traddr": "10.0.0.1", 00:19:27.396 "trsvcid": "33844" 00:19:27.396 }, 00:19:27.396 "auth": { 00:19:27.396 "state": "completed", 00:19:27.396 "digest": "sha256", 00:19:27.396 "dhgroup": "null" 00:19:27.396 } 00:19:27.396 } 00:19:27.396 ]' 00:19:27.396 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:27.396 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:27.396 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:27.396 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:27.396 21:26:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r 
'.[0].auth.state' 00:19:27.396 21:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:27.396 21:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:27.396 21:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:27.654 21:26:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:19:28.590 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:28.590 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:28.590 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:28.590 21:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.590 21:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.590 21:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.590 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:28.590 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.590 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:28.847 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:29.105 00:19:29.363 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:29.363 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:29.363 21:26:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:29.363 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:29.363 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:29.620 { 00:19:29.620 "cntlid": 7, 00:19:29.620 "qid": 0, 00:19:29.620 "state": "enabled", 00:19:29.620 "thread": "nvmf_tgt_poll_group_000", 00:19:29.620 "listen_address": { 00:19:29.620 "trtype": "TCP", 00:19:29.620 "adrfam": "IPv4", 00:19:29.620 "traddr": "10.0.0.2", 00:19:29.620 "trsvcid": "4420" 00:19:29.620 }, 00:19:29.620 "peer_address": { 00:19:29.620 "trtype": "TCP", 00:19:29.620 "adrfam": "IPv4", 00:19:29.620 "traddr": "10.0.0.1", 00:19:29.620 "trsvcid": "33886" 00:19:29.620 }, 00:19:29.620 "auth": { 00:19:29.620 "state": "completed", 00:19:29.620 "digest": "sha256", 00:19:29.620 "dhgroup": "null" 00:19:29.620 } 00:19:29.620 } 00:19:29.620 ]' 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:29.620 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:29.899 21:26:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:30.846 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:30.846 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.103 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.104 21:26:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:31.361 00:19:31.361 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:31.361 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:31.361 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:31.620 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:31.620 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:31.620 21:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:19:31.620 21:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:31.878 { 00:19:31.878 "cntlid": 9, 00:19:31.878 "qid": 0, 00:19:31.878 "state": "enabled", 00:19:31.878 "thread": "nvmf_tgt_poll_group_000", 00:19:31.878 "listen_address": { 00:19:31.878 "trtype": "TCP", 00:19:31.878 "adrfam": "IPv4", 00:19:31.878 "traddr": "10.0.0.2", 00:19:31.878 "trsvcid": "4420" 00:19:31.878 }, 00:19:31.878 "peer_address": { 00:19:31.878 "trtype": "TCP", 00:19:31.878 "adrfam": "IPv4", 00:19:31.878 "traddr": "10.0.0.1", 00:19:31.878 "trsvcid": "33914" 00:19:31.878 }, 00:19:31.878 "auth": { 00:19:31.878 "state": "completed", 00:19:31.878 "digest": "sha256", 00:19:31.878 "dhgroup": "ffdhe2048" 00:19:31.878 } 00:19:31.878 } 00:19:31.878 ]' 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:31.878 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:32.135 21:26:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:19:33.071 21:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:33.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:33.071 21:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:33.071 21:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.071 21:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.071 21:26:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.071 21:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:33.071 21:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:33.071 21:26:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:19:33.329 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:19:33.329 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:33.329 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:33.329 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:33.330 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:33.330 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:33.330 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.330 21:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.330 21:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.330 21:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.330 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.330 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:33.588 00:19:33.588 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:33.588 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:33.588 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:33.845 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.845 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:33.845 21:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:33.845 21:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:33.845 21:26:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:33.845 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:33.845 { 00:19:33.845 "cntlid": 11, 00:19:33.845 "qid": 0, 00:19:33.845 "state": "enabled", 00:19:33.845 "thread": "nvmf_tgt_poll_group_000", 00:19:33.845 "listen_address": { 00:19:33.845 "trtype": "TCP", 00:19:33.845 "adrfam": "IPv4", 00:19:33.845 "traddr": "10.0.0.2", 00:19:33.845 "trsvcid": "4420" 00:19:33.845 }, 00:19:33.845 "peer_address": { 00:19:33.845 "trtype": "TCP", 00:19:33.845 "adrfam": "IPv4", 00:19:33.845 "traddr": "10.0.0.1", 00:19:33.845 "trsvcid": "33954" 00:19:33.845 }, 00:19:33.845 "auth": { 00:19:33.845 "state": "completed", 00:19:33.845 "digest": "sha256", 00:19:33.846 "dhgroup": "ffdhe2048" 00:19:33.846 } 00:19:33.846 } 00:19:33.846 ]' 00:19:33.846 
21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:34.102 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:34.103 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:34.103 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:34.103 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:34.103 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:34.103 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:34.103 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:34.361 21:26:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:19:35.297 21:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:35.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:35.297 21:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:35.297 21:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.297 21:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:35.297 21:26:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.297 21:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:35.297 21:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.297 21:26:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.556 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:35.814 00:19:35.814 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:35.814 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:35.814 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:36.072 { 00:19:36.072 "cntlid": 13, 00:19:36.072 "qid": 0, 00:19:36.072 "state": "enabled", 00:19:36.072 "thread": "nvmf_tgt_poll_group_000", 00:19:36.072 "listen_address": { 00:19:36.072 "trtype": "TCP", 00:19:36.072 "adrfam": "IPv4", 00:19:36.072 "traddr": "10.0.0.2", 00:19:36.072 "trsvcid": "4420" 00:19:36.072 }, 00:19:36.072 "peer_address": { 00:19:36.072 "trtype": "TCP", 00:19:36.072 "adrfam": "IPv4", 00:19:36.072 "traddr": "10.0.0.1", 00:19:36.072 "trsvcid": "33968" 00:19:36.072 }, 00:19:36.072 "auth": { 00:19:36.072 "state": "completed", 00:19:36.072 "digest": "sha256", 00:19:36.072 "dhgroup": "ffdhe2048" 00:19:36.072 } 00:19:36.072 } 00:19:36.072 ]' 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:36.072 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:36.330 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:36.330 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:36.330 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:36.330 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:36.330 21:26:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:36.588 21:26:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n 
nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:19:37.524 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:37.524 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:37.524 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:37.524 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.524 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.524 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.524 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:37.524 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.524 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:19:37.782 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:37.783 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:38.041 00:19:38.041 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:38.041 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:19:38.041 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:38.299 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:38.299 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:38.299 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:38.299 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:38.299 21:26:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:38.299 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:38.299 { 00:19:38.299 "cntlid": 15, 00:19:38.299 "qid": 0, 00:19:38.299 "state": "enabled", 00:19:38.299 "thread": "nvmf_tgt_poll_group_000", 00:19:38.299 "listen_address": { 00:19:38.299 "trtype": "TCP", 00:19:38.299 "adrfam": "IPv4", 00:19:38.299 "traddr": "10.0.0.2", 00:19:38.299 "trsvcid": "4420" 00:19:38.299 }, 00:19:38.299 "peer_address": { 00:19:38.299 "trtype": "TCP", 00:19:38.299 "adrfam": "IPv4", 00:19:38.299 "traddr": "10.0.0.1", 00:19:38.299 "trsvcid": "40554" 00:19:38.299 }, 00:19:38.299 "auth": { 00:19:38.299 "state": "completed", 00:19:38.299 "digest": "sha256", 00:19:38.299 "dhgroup": "ffdhe2048" 00:19:38.299 } 00:19:38.299 } 00:19:38.299 ]' 00:19:38.299 21:26:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:38.299 21:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:38.299 21:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:38.299 21:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:19:38.299 21:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:38.557 21:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:38.557 21:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:38.557 21:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:38.816 21:26:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:39.753 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:39.753 21:26:14 
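At this point the dhgroup loop advances from ffdhe2048 to ffdhe3072. With dhgroup null the handshake is a pure challenge/response keyed by the shared DH-HMAC-CHAP secret; the ffdhe* passes additionally run a finite-field Diffie-Hellman exchange (the RFC 7919 groups), mixing fresh DH material into the response so it no longer depends on the long-lived key alone. Between passes only the option line changes, e.g.:

    hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072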
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:39.753 21:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.011 21:26:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.011 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.011 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:40.268 00:19:40.268 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:40.268 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:40.268 21:26:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:40.525 { 00:19:40.525 "cntlid": 17, 00:19:40.525 "qid": 0, 00:19:40.525 "state": "enabled", 00:19:40.525 "thread": "nvmf_tgt_poll_group_000", 00:19:40.525 "listen_address": { 00:19:40.525 "trtype": "TCP", 00:19:40.525 "adrfam": "IPv4", 
00:19:40.525 "traddr": "10.0.0.2", 00:19:40.525 "trsvcid": "4420" 00:19:40.525 }, 00:19:40.525 "peer_address": { 00:19:40.525 "trtype": "TCP", 00:19:40.525 "adrfam": "IPv4", 00:19:40.525 "traddr": "10.0.0.1", 00:19:40.525 "trsvcid": "40588" 00:19:40.525 }, 00:19:40.525 "auth": { 00:19:40.525 "state": "completed", 00:19:40.525 "digest": "sha256", 00:19:40.525 "dhgroup": "ffdhe3072" 00:19:40.525 } 00:19:40.525 } 00:19:40.525 ]' 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:40.525 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:40.784 21:26:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:19:41.723 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:41.723 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:41.723 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:41.723 21:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.723 21:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.723 21:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.723 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:41.723 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.723 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:41.981 21:26:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:41.981 21:26:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:42.240 00:19:42.498 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:42.498 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:42.498 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:42.756 { 00:19:42.756 "cntlid": 19, 00:19:42.756 "qid": 0, 00:19:42.756 "state": "enabled", 00:19:42.756 "thread": "nvmf_tgt_poll_group_000", 00:19:42.756 "listen_address": { 00:19:42.756 "trtype": "TCP", 00:19:42.756 "adrfam": "IPv4", 00:19:42.756 "traddr": "10.0.0.2", 00:19:42.756 "trsvcid": "4420" 00:19:42.756 }, 00:19:42.756 "peer_address": { 00:19:42.756 "trtype": "TCP", 00:19:42.756 "adrfam": "IPv4", 00:19:42.756 "traddr": "10.0.0.1", 00:19:42.756 "trsvcid": "40614" 00:19:42.756 }, 00:19:42.756 "auth": { 00:19:42.756 "state": "completed", 00:19:42.756 "digest": "sha256", 00:19:42.756 "dhgroup": "ffdhe3072" 00:19:42.756 } 00:19:42.756 } 00:19:42.756 ]' 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:42.756 21:26:17 
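One detail in the recurring ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) entries: the controller key is optional. For key0 through key2 the script passes both --dhchap-key and --dhchap-ctrlr-key, making authentication bidirectional (the host verifies the controller too); for key3 the ckeys slot is empty, the :+ expansion yields nothing, and the flag is dropped, which is why the key3 passes register the host with --dhchap-key key3 alone and the matching nvme connect calls carry only --dhchap-secret. Paraphrased with an explicit variable in place of the function argument $3:

    # Empty ${ckeys[keyid]} -> ckey=() and the controller key flag is omitted.
    ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$keyid" "${ckey[@]}"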
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:42.756 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:43.014 21:26:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:19:43.950 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:43.950 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:43.950 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:43.950 21:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:43.950 21:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:43.950 21:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:43.950 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:43.950 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:43.950 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.208 21:26:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:44.467 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:44.748 { 00:19:44.748 "cntlid": 21, 00:19:44.748 "qid": 0, 00:19:44.748 "state": "enabled", 00:19:44.748 "thread": "nvmf_tgt_poll_group_000", 00:19:44.748 "listen_address": { 00:19:44.748 "trtype": "TCP", 00:19:44.748 "adrfam": "IPv4", 00:19:44.748 "traddr": "10.0.0.2", 00:19:44.748 "trsvcid": "4420" 00:19:44.748 }, 00:19:44.748 "peer_address": { 00:19:44.748 "trtype": "TCP", 00:19:44.748 "adrfam": "IPv4", 00:19:44.748 "traddr": "10.0.0.1", 00:19:44.748 "trsvcid": "40648" 00:19:44.748 }, 00:19:44.748 "auth": { 00:19:44.748 "state": "completed", 00:19:44.748 "digest": "sha256", 00:19:44.748 "dhgroup": "ffdhe3072" 00:19:44.748 } 00:19:44.748 } 00:19:44.748 ]' 00:19:44.748 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:45.009 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:45.009 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:45.009 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:45.009 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:45.009 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:45.009 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:45.009 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:45.267 21:26:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:19:46.203 21:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:46.203 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
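Besides the RPC-driven attaches, every iteration also authenticates from the kernel host stack with nvme-cli, as in the connect/disconnect pair just above. The two-digit field after DHHC-1: in each secret is the key's transformation tag in the NVMe DH-HMAC-CHAP secret representation (00 = untransformed, 01/02/03 = SHA-256/384/512-transformed), which is why key0 through key3 in this log carry tags 00 through 03. With the base64 payloads abbreviated:

    nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
        -q "$hostnqn" --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
        --dhchap-secret 'DHHC-1:02:OGRi...' --dhchap-ctrl-secret 'DHHC-1:01:N2U5...'
    nvme disconnect -n nqn.2024-03.io.spdk:cnode0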
00:19:46.203 21:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:46.203 21:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.203 21:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.203 21:26:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.203 21:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:46.203 21:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:46.203 21:26:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:46.461 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:47.031 00:19:47.031 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:47.031 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:47.031 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:47.289 { 00:19:47.289 "cntlid": 23, 00:19:47.289 "qid": 0, 00:19:47.289 "state": "enabled", 00:19:47.289 "thread": "nvmf_tgt_poll_group_000", 00:19:47.289 "listen_address": { 00:19:47.289 "trtype": "TCP", 00:19:47.289 "adrfam": "IPv4", 00:19:47.289 "traddr": "10.0.0.2", 00:19:47.289 "trsvcid": "4420" 00:19:47.289 }, 00:19:47.289 "peer_address": { 00:19:47.289 "trtype": "TCP", 00:19:47.289 "adrfam": "IPv4", 00:19:47.289 "traddr": "10.0.0.1", 00:19:47.289 "trsvcid": "40676" 00:19:47.289 }, 00:19:47.289 "auth": { 00:19:47.289 "state": "completed", 00:19:47.289 "digest": "sha256", 00:19:47.289 "dhgroup": "ffdhe3072" 00:19:47.289 } 00:19:47.289 } 00:19:47.289 ]' 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:47.289 21:26:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:47.547 21:26:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:48.482 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:48.482 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 0 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.046 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:49.304 00:19:49.305 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:49.305 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:49.305 21:26:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:49.563 { 00:19:49.563 "cntlid": 25, 00:19:49.563 "qid": 0, 00:19:49.563 "state": "enabled", 00:19:49.563 "thread": "nvmf_tgt_poll_group_000", 00:19:49.563 "listen_address": { 00:19:49.563 "trtype": "TCP", 00:19:49.563 "adrfam": "IPv4", 00:19:49.563 "traddr": "10.0.0.2", 00:19:49.563 "trsvcid": "4420" 00:19:49.563 }, 00:19:49.563 "peer_address": { 00:19:49.563 "trtype": "TCP", 00:19:49.563 "adrfam": "IPv4", 00:19:49.563 "traddr": "10.0.0.1", 00:19:49.563 "trsvcid": "53190" 00:19:49.563 }, 00:19:49.563 "auth": { 00:19:49.563 "state": "completed", 00:19:49.563 "digest": "sha256", 00:19:49.563 "dhgroup": "ffdhe4096" 00:19:49.563 } 00:19:49.563 } 00:19:49.563 ]' 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:49.563 21:26:24 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:49.563 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:49.821 21:26:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:19:50.755 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:50.755 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:50.755 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:50.755 21:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:50.755 21:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:50.755 21:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:50.755 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:50.755 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:50.755 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.013 21:26:25 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.013 21:26:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:19:51.580 00:19:51.580 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:51.580 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:51.580 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:51.838 { 00:19:51.838 "cntlid": 27, 00:19:51.838 "qid": 0, 00:19:51.838 "state": "enabled", 00:19:51.838 "thread": "nvmf_tgt_poll_group_000", 00:19:51.838 "listen_address": { 00:19:51.838 "trtype": "TCP", 00:19:51.838 "adrfam": "IPv4", 00:19:51.838 "traddr": "10.0.0.2", 00:19:51.838 "trsvcid": "4420" 00:19:51.838 }, 00:19:51.838 "peer_address": { 00:19:51.838 "trtype": "TCP", 00:19:51.838 "adrfam": "IPv4", 00:19:51.838 "traddr": "10.0.0.1", 00:19:51.838 "trsvcid": "53218" 00:19:51.838 }, 00:19:51.838 "auth": { 00:19:51.838 "state": "completed", 00:19:51.838 "digest": "sha256", 00:19:51.838 "dhgroup": "ffdhe4096" 00:19:51.838 } 00:19:51.838 } 00:19:51.838 ]' 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:51.838 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:51.839 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:52.096 21:26:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:19:53.047 21:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:53.047 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:53.047 21:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:53.047 21:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.047 21:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.047 21:26:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.047 21:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:53.047 21:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.047 21:26:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.305 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:19:53.873 00:19:53.873 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:53.873 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:53.873 21:26:28 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:54.131 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:54.131 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:54.131 21:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.131 21:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:54.131 21:26:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.131 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:54.131 { 00:19:54.131 "cntlid": 29, 00:19:54.131 "qid": 0, 00:19:54.131 "state": "enabled", 00:19:54.131 "thread": "nvmf_tgt_poll_group_000", 00:19:54.131 "listen_address": { 00:19:54.131 "trtype": "TCP", 00:19:54.131 "adrfam": "IPv4", 00:19:54.131 "traddr": "10.0.0.2", 00:19:54.131 "trsvcid": "4420" 00:19:54.131 }, 00:19:54.131 "peer_address": { 00:19:54.131 "trtype": "TCP", 00:19:54.131 "adrfam": "IPv4", 00:19:54.132 "traddr": "10.0.0.1", 00:19:54.132 "trsvcid": "53242" 00:19:54.132 }, 00:19:54.132 "auth": { 00:19:54.132 "state": "completed", 00:19:54.132 "digest": "sha256", 00:19:54.132 "dhgroup": "ffdhe4096" 00:19:54.132 } 00:19:54.132 } 00:19:54.132 ]' 00:19:54.132 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:54.132 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:54.132 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:54.132 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:54.132 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:54.132 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:54.132 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:54.132 21:26:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:54.390 21:26:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:19:55.328 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:55.328 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:55.328 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:55.328 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.328 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.328 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.328 21:26:30 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:55.328 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.328 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:55.586 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:55.587 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:55.587 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:19:56.155 00:19:56.155 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:56.155 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:56.155 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:56.412 { 00:19:56.412 "cntlid": 31, 00:19:56.412 "qid": 0, 00:19:56.412 "state": "enabled", 00:19:56.412 "thread": "nvmf_tgt_poll_group_000", 00:19:56.412 "listen_address": { 00:19:56.412 "trtype": "TCP", 00:19:56.412 "adrfam": "IPv4", 00:19:56.412 "traddr": "10.0.0.2", 00:19:56.412 "trsvcid": "4420" 00:19:56.412 }, 
00:19:56.412 "peer_address": { 00:19:56.412 "trtype": "TCP", 00:19:56.412 "adrfam": "IPv4", 00:19:56.412 "traddr": "10.0.0.1", 00:19:56.412 "trsvcid": "53274" 00:19:56.412 }, 00:19:56.412 "auth": { 00:19:56.412 "state": "completed", 00:19:56.412 "digest": "sha256", 00:19:56.412 "dhgroup": "ffdhe4096" 00:19:56.412 } 00:19:56.412 } 00:19:56.412 ]' 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:56.412 21:26:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:56.412 21:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:19:56.412 21:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:56.412 21:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:56.412 21:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:56.412 21:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:56.670 21:26:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:19:57.608 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.608 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:57.866 21:26:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:19:58.433 00:19:58.433 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:19:58.433 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:19:58.433 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:19:58.690 { 00:19:58.690 "cntlid": 33, 00:19:58.690 "qid": 0, 00:19:58.690 "state": "enabled", 00:19:58.690 "thread": "nvmf_tgt_poll_group_000", 00:19:58.690 "listen_address": { 00:19:58.690 "trtype": "TCP", 00:19:58.690 "adrfam": "IPv4", 00:19:58.690 "traddr": "10.0.0.2", 00:19:58.690 "trsvcid": "4420" 00:19:58.690 }, 00:19:58.690 "peer_address": { 00:19:58.690 "trtype": "TCP", 00:19:58.690 "adrfam": "IPv4", 00:19:58.690 "traddr": "10.0.0.1", 00:19:58.690 "trsvcid": "52852" 00:19:58.690 }, 00:19:58.690 "auth": { 00:19:58.690 "state": "completed", 00:19:58.690 "digest": "sha256", 00:19:58.690 "dhgroup": "ffdhe6144" 00:19:58.690 } 00:19:58.690 } 00:19:58.690 ]' 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:19:58.690 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:19:58.948 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:19:58.948 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:19:58.948 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:19:58.948 21:26:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:19:58.948 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:19:59.205 21:26:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:20:00.173 21:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:00.173 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:00.173 21:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:00.173 21:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.173 21:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.173 21:26:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.173 21:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:00.173 21:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.173 21:26:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.430 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:00.995 00:20:00.995 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:00.995 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:00.995 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:01.253 { 00:20:01.253 "cntlid": 35, 00:20:01.253 "qid": 0, 00:20:01.253 "state": "enabled", 00:20:01.253 "thread": "nvmf_tgt_poll_group_000", 00:20:01.253 "listen_address": { 00:20:01.253 "trtype": "TCP", 00:20:01.253 "adrfam": "IPv4", 00:20:01.253 "traddr": "10.0.0.2", 00:20:01.253 "trsvcid": "4420" 00:20:01.253 }, 00:20:01.253 "peer_address": { 00:20:01.253 "trtype": "TCP", 00:20:01.253 "adrfam": "IPv4", 00:20:01.253 "traddr": "10.0.0.1", 00:20:01.253 "trsvcid": "52870" 00:20:01.253 }, 00:20:01.253 "auth": { 00:20:01.253 "state": "completed", 00:20:01.253 "digest": "sha256", 00:20:01.253 "dhgroup": "ffdhe6144" 00:20:01.253 } 00:20:01.253 } 00:20:01.253 ]' 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:01.253 21:26:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:01.253 21:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:01.513 21:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:01.513 21:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:01.773 21:26:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:20:02.707 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:02.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:02.707 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 
-- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:02.707 21:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.707 21:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.707 21:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.707 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:02.707 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.707 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:02.965 21:26:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:03.530 00:20:03.530 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:03.530 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:03.530 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:03.787 { 00:20:03.787 "cntlid": 37, 00:20:03.787 "qid": 0, 00:20:03.787 "state": "enabled", 00:20:03.787 "thread": "nvmf_tgt_poll_group_000", 00:20:03.787 "listen_address": { 00:20:03.787 "trtype": "TCP", 00:20:03.787 "adrfam": "IPv4", 00:20:03.787 "traddr": "10.0.0.2", 00:20:03.787 "trsvcid": "4420" 00:20:03.787 }, 00:20:03.787 "peer_address": { 00:20:03.787 "trtype": "TCP", 00:20:03.787 "adrfam": "IPv4", 00:20:03.787 "traddr": "10.0.0.1", 00:20:03.787 "trsvcid": "52900" 00:20:03.787 }, 00:20:03.787 "auth": { 00:20:03.787 "state": "completed", 00:20:03.787 "digest": "sha256", 00:20:03.787 "dhgroup": "ffdhe6144" 00:20:03.787 } 00:20:03.787 } 00:20:03.787 ]' 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:03.787 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:04.045 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:04.304 21:26:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:20:05.240 21:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:05.240 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:05.240 21:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:05.240 21:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.240 21:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.240 21:26:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.240 21:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:05.240 21:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.240 21:26:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 
ffdhe6144 3 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:05.498 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:06.065 00:20:06.065 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:06.065 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:06.065 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:06.324 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:06.324 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:06.324 21:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:06.324 21:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.324 21:26:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:06.324 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:06.324 { 00:20:06.324 "cntlid": 39, 00:20:06.324 "qid": 0, 00:20:06.324 "state": "enabled", 00:20:06.324 "thread": "nvmf_tgt_poll_group_000", 00:20:06.324 "listen_address": { 00:20:06.324 "trtype": "TCP", 00:20:06.324 "adrfam": "IPv4", 00:20:06.324 "traddr": "10.0.0.2", 00:20:06.324 "trsvcid": "4420" 00:20:06.324 }, 00:20:06.324 "peer_address": { 00:20:06.324 "trtype": "TCP", 00:20:06.324 "adrfam": "IPv4", 00:20:06.324 "traddr": "10.0.0.1", 00:20:06.324 "trsvcid": "52922" 00:20:06.324 }, 00:20:06.324 "auth": { 00:20:06.324 "state": "completed", 00:20:06.324 "digest": "sha256", 00:20:06.324 "dhgroup": "ffdhe6144" 00:20:06.324 } 00:20:06.324 } 00:20:06.324 ]' 00:20:06.324 21:26:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:06.324 21:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:06.324 21:26:41 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:06.324 21:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:06.324 21:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:06.324 21:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:06.324 21:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:06.324 21:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:06.889 21:26:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:07.825 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:07.825 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.084 21:26:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:08.084 21:26:42 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.085 21:26:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.018 00:20:09.018 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:09.018 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:09.018 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:09.276 { 00:20:09.276 "cntlid": 41, 00:20:09.276 "qid": 0, 00:20:09.276 "state": "enabled", 00:20:09.276 "thread": "nvmf_tgt_poll_group_000", 00:20:09.276 "listen_address": { 00:20:09.276 "trtype": "TCP", 00:20:09.276 "adrfam": "IPv4", 00:20:09.276 "traddr": "10.0.0.2", 00:20:09.276 "trsvcid": "4420" 00:20:09.276 }, 00:20:09.276 "peer_address": { 00:20:09.276 "trtype": "TCP", 00:20:09.276 "adrfam": "IPv4", 00:20:09.276 "traddr": "10.0.0.1", 00:20:09.276 "trsvcid": "58236" 00:20:09.276 }, 00:20:09.276 "auth": { 00:20:09.276 "state": "completed", 00:20:09.276 "digest": "sha256", 00:20:09.276 "dhgroup": "ffdhe8192" 00:20:09.276 } 00:20:09.276 } 00:20:09.276 ]' 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.276 21:26:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.533 21:26:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:20:10.466 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:10.466 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:10.466 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:10.466 21:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.466 21:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.466 21:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.466 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:10.466 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.466 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:10.722 21:26:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:11.652 00:20:11.652 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:11.652 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:11.652 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:11.908 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:11.908 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:11.908 21:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:11.908 21:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:11.908 21:26:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:11.908 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:11.908 { 00:20:11.908 "cntlid": 43, 00:20:11.908 "qid": 0, 00:20:11.908 "state": "enabled", 00:20:11.908 "thread": "nvmf_tgt_poll_group_000", 00:20:11.908 "listen_address": { 00:20:11.908 "trtype": "TCP", 00:20:11.908 "adrfam": "IPv4", 00:20:11.908 "traddr": "10.0.0.2", 00:20:11.908 "trsvcid": "4420" 00:20:11.908 }, 00:20:11.908 "peer_address": { 00:20:11.908 "trtype": "TCP", 00:20:11.908 "adrfam": "IPv4", 00:20:11.908 "traddr": "10.0.0.1", 00:20:11.908 "trsvcid": "58260" 00:20:11.908 }, 00:20:11.908 "auth": { 00:20:11.908 "state": "completed", 00:20:11.908 "digest": "sha256", 00:20:11.909 "dhgroup": "ffdhe8192" 00:20:11.909 } 00:20:11.909 } 00:20:11.909 ]' 00:20:11.909 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:11.909 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:11.909 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:11.909 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:11.909 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:11.909 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:11.909 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:11.909 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:12.165 21:26:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:20:13.099 21:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:13.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:13.099 21:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:13.099 21:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.099 21:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.099 21:26:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.099 21:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in 
"${!keys[@]}" 00:20:13.099 21:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.099 21:26:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:13.357 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.291 00:20:14.291 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:14.291 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:14.291 21:26:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:14.549 { 00:20:14.549 "cntlid": 45, 00:20:14.549 "qid": 0, 00:20:14.549 "state": "enabled", 00:20:14.549 "thread": "nvmf_tgt_poll_group_000", 00:20:14.549 "listen_address": { 00:20:14.549 "trtype": "TCP", 00:20:14.549 "adrfam": "IPv4", 00:20:14.549 "traddr": "10.0.0.2", 00:20:14.549 "trsvcid": "4420" 
00:20:14.549 }, 00:20:14.549 "peer_address": { 00:20:14.549 "trtype": "TCP", 00:20:14.549 "adrfam": "IPv4", 00:20:14.549 "traddr": "10.0.0.1", 00:20:14.549 "trsvcid": "58280" 00:20:14.549 }, 00:20:14.549 "auth": { 00:20:14.549 "state": "completed", 00:20:14.549 "digest": "sha256", 00:20:14.549 "dhgroup": "ffdhe8192" 00:20:14.549 } 00:20:14.549 } 00:20:14.549 ]' 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:14.549 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:14.807 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:14.807 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:14.807 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:14.807 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:14.807 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.065 21:26:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:20:16.037 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.037 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:16.037 21:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.037 21:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.037 21:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.037 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:16.037 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.037 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.296 21:26:50 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:16.296 21:26:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:17.230 00:20:17.230 21:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:17.230 21:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:17.230 21:26:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:17.488 { 00:20:17.488 "cntlid": 47, 00:20:17.488 "qid": 0, 00:20:17.488 "state": "enabled", 00:20:17.488 "thread": "nvmf_tgt_poll_group_000", 00:20:17.488 "listen_address": { 00:20:17.488 "trtype": "TCP", 00:20:17.488 "adrfam": "IPv4", 00:20:17.488 "traddr": "10.0.0.2", 00:20:17.488 "trsvcid": "4420" 00:20:17.488 }, 00:20:17.488 "peer_address": { 00:20:17.488 "trtype": "TCP", 00:20:17.488 "adrfam": "IPv4", 00:20:17.488 "traddr": "10.0.0.1", 00:20:17.488 "trsvcid": "58316" 00:20:17.488 }, 00:20:17.488 "auth": { 00:20:17.488 "state": "completed", 00:20:17.488 "digest": "sha256", 00:20:17.488 "dhgroup": "ffdhe8192" 00:20:17.488 } 00:20:17.488 } 00:20:17.488 ]' 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:17.488 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:17.488 
21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.747 21:26:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:18.711 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.711 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.968 21:26:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:19.535 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:19.535 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:19.535 { 00:20:19.535 "cntlid": 49, 00:20:19.535 "qid": 0, 00:20:19.535 "state": "enabled", 00:20:19.535 "thread": "nvmf_tgt_poll_group_000", 00:20:19.535 "listen_address": { 00:20:19.535 "trtype": "TCP", 00:20:19.535 "adrfam": "IPv4", 00:20:19.535 "traddr": "10.0.0.2", 00:20:19.535 "trsvcid": "4420" 00:20:19.535 }, 00:20:19.535 "peer_address": { 00:20:19.535 "trtype": "TCP", 00:20:19.535 "adrfam": "IPv4", 00:20:19.535 "traddr": "10.0.0.1", 00:20:19.535 "trsvcid": "52084" 00:20:19.535 }, 00:20:19.535 "auth": { 00:20:19.535 "state": "completed", 00:20:19.535 "digest": "sha384", 00:20:19.535 "dhgroup": "null" 00:20:19.535 } 00:20:19.535 } 00:20:19.535 ]' 00:20:19.793 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:19.793 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:19.793 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:19.793 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:19.793 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:19.793 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:19.793 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:19.793 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.051 21:26:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:20:20.988 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.988 21:26:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:20.988 21:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:20.988 21:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.988 21:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:20.988 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:20.988 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:20.988 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.246 21:26:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:21.504 00:20:21.504 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:21.504 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.504 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:21.762 { 00:20:21.762 "cntlid": 51, 00:20:21.762 "qid": 0, 00:20:21.762 "state": "enabled", 00:20:21.762 "thread": "nvmf_tgt_poll_group_000", 00:20:21.762 "listen_address": { 00:20:21.762 "trtype": "TCP", 00:20:21.762 "adrfam": "IPv4", 00:20:21.762 "traddr": "10.0.0.2", 00:20:21.762 "trsvcid": "4420" 00:20:21.762 }, 00:20:21.762 "peer_address": { 00:20:21.762 "trtype": "TCP", 00:20:21.762 "adrfam": "IPv4", 00:20:21.762 "traddr": "10.0.0.1", 00:20:21.762 "trsvcid": "52102" 00:20:21.762 }, 00:20:21.762 "auth": { 00:20:21.762 "state": "completed", 00:20:21.762 "digest": "sha384", 00:20:21.762 "dhgroup": "null" 00:20:21.762 } 00:20:21.762 } 00:20:21.762 ]' 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:21.762 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:22.020 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:22.020 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:22.020 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:22.020 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:22.278 21:26:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:20:23.209 21:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.209 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.209 21:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:23.209 21:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.209 21:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.209 21:26:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.209 21:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:23.209 21:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:23.209 21:26:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:20:23.467 21:26:58 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.467 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:23.725 00:20:23.725 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:23.725 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:23.725 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:23.983 { 00:20:23.983 "cntlid": 53, 00:20:23.983 "qid": 0, 00:20:23.983 "state": "enabled", 00:20:23.983 "thread": "nvmf_tgt_poll_group_000", 00:20:23.983 "listen_address": { 00:20:23.983 "trtype": "TCP", 00:20:23.983 "adrfam": "IPv4", 00:20:23.983 "traddr": "10.0.0.2", 00:20:23.983 "trsvcid": "4420" 00:20:23.983 }, 00:20:23.983 "peer_address": { 00:20:23.983 "trtype": "TCP", 00:20:23.983 "adrfam": "IPv4", 00:20:23.983 "traddr": "10.0.0.1", 00:20:23.983 "trsvcid": "52138" 00:20:23.983 }, 00:20:23.983 "auth": { 00:20:23.983 "state": "completed", 00:20:23.983 "digest": "sha384", 00:20:23.983 "dhgroup": "null" 00:20:23.983 } 00:20:23.983 } 00:20:23.983 ]' 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == 
\s\h\a\3\8\4 ]] 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:23.983 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:24.240 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.240 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.240 21:26:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.240 21:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:20:25.612 21:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.612 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.612 21:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:25.612 21:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.612 21:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.612 21:26:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.612 21:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:25.612 21:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.612 21:26:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.612 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:25.869 00:20:25.869 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:25.869 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:25.869 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:26.126 { 00:20:26.126 "cntlid": 55, 00:20:26.126 "qid": 0, 00:20:26.126 "state": "enabled", 00:20:26.126 "thread": "nvmf_tgt_poll_group_000", 00:20:26.126 "listen_address": { 00:20:26.126 "trtype": "TCP", 00:20:26.126 "adrfam": "IPv4", 00:20:26.126 "traddr": "10.0.0.2", 00:20:26.126 "trsvcid": "4420" 00:20:26.126 }, 00:20:26.126 "peer_address": { 00:20:26.126 "trtype": "TCP", 00:20:26.126 "adrfam": "IPv4", 00:20:26.126 "traddr": "10.0.0.1", 00:20:26.126 "trsvcid": "52154" 00:20:26.126 }, 00:20:26.126 "auth": { 00:20:26.126 "state": "completed", 00:20:26.126 "digest": "sha384", 00:20:26.126 "dhgroup": "null" 00:20:26.126 } 00:20:26.126 } 00:20:26.126 ]' 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:20:26.126 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:26.383 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.383 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.383 21:27:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.640 21:27:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:20:27.571 21:27:02 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.571 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.571 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:27.571 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.571 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.571 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.571 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:27.571 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:27.571 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.571 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:27.828 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:28.085 00:20:28.085 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:28.085 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:28.085 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:28.343 { 00:20:28.343 "cntlid": 57, 00:20:28.343 "qid": 0, 00:20:28.343 "state": "enabled", 00:20:28.343 "thread": "nvmf_tgt_poll_group_000", 00:20:28.343 "listen_address": { 00:20:28.343 "trtype": "TCP", 00:20:28.343 "adrfam": "IPv4", 00:20:28.343 "traddr": "10.0.0.2", 00:20:28.343 "trsvcid": "4420" 00:20:28.343 }, 00:20:28.343 "peer_address": { 00:20:28.343 "trtype": "TCP", 00:20:28.343 "adrfam": "IPv4", 00:20:28.343 "traddr": "10.0.0.1", 00:20:28.343 "trsvcid": "57068" 00:20:28.343 }, 00:20:28.343 "auth": { 00:20:28.343 "state": "completed", 00:20:28.343 "digest": "sha384", 00:20:28.343 "dhgroup": "ffdhe2048" 00:20:28.343 } 00:20:28.343 } 00:20:28.343 ]' 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:28.343 21:27:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:28.343 21:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:28.343 21:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:28.343 21:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:28.343 21:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:28.343 21:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.601 21:27:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:20:29.531 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:29.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:29.789 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.789 21:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:29.789 21:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.789 21:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:29.789 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:29.789 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:29.789 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.047 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:30.304 00:20:30.304 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:30.304 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:30.304 21:27:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:30.562 { 00:20:30.562 "cntlid": 59, 00:20:30.562 "qid": 0, 00:20:30.562 "state": "enabled", 00:20:30.562 "thread": "nvmf_tgt_poll_group_000", 00:20:30.562 "listen_address": { 00:20:30.562 "trtype": "TCP", 00:20:30.562 "adrfam": "IPv4", 00:20:30.562 "traddr": "10.0.0.2", 00:20:30.562 "trsvcid": "4420" 00:20:30.562 }, 00:20:30.562 "peer_address": { 00:20:30.562 "trtype": "TCP", 00:20:30.562 "adrfam": "IPv4", 00:20:30.562 
"traddr": "10.0.0.1", 00:20:30.562 "trsvcid": "57092" 00:20:30.562 }, 00:20:30.562 "auth": { 00:20:30.562 "state": "completed", 00:20:30.562 "digest": "sha384", 00:20:30.562 "dhgroup": "ffdhe2048" 00:20:30.562 } 00:20:30.562 } 00:20:30.562 ]' 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:30.562 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:30.820 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:30.820 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:30.820 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.078 21:27:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:20:32.011 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:32.011 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:32.011 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:32.011 21:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.011 21:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.011 21:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.011 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:32.011 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:32.011 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.267 21:27:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:32.525 00:20:32.525 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:32.525 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:32.525 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:32.782 { 00:20:32.782 "cntlid": 61, 00:20:32.782 "qid": 0, 00:20:32.782 "state": "enabled", 00:20:32.782 "thread": "nvmf_tgt_poll_group_000", 00:20:32.782 "listen_address": { 00:20:32.782 "trtype": "TCP", 00:20:32.782 "adrfam": "IPv4", 00:20:32.782 "traddr": "10.0.0.2", 00:20:32.782 "trsvcid": "4420" 00:20:32.782 }, 00:20:32.782 "peer_address": { 00:20:32.782 "trtype": "TCP", 00:20:32.782 "adrfam": "IPv4", 00:20:32.782 "traddr": "10.0.0.1", 00:20:32.782 "trsvcid": "57120" 00:20:32.782 }, 00:20:32.782 "auth": { 00:20:32.782 "state": "completed", 00:20:32.782 "digest": "sha384", 00:20:32.782 "dhgroup": "ffdhe2048" 00:20:32.782 } 00:20:32.782 } 00:20:32.782 ]' 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:32.782 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:33.039 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:33.039 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:33.039 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:33.039 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:33.039 21:27:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:33.297 21:27:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:20:34.231 21:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:34.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:34.231 21:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:34.231 21:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.231 21:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.231 21:27:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.231 21:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:34.231 21:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.231 21:27:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.489 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:34.747 00:20:34.747 21:27:09 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:34.747 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.747 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:35.005 { 00:20:35.005 "cntlid": 63, 00:20:35.005 "qid": 0, 00:20:35.005 "state": "enabled", 00:20:35.005 "thread": "nvmf_tgt_poll_group_000", 00:20:35.005 "listen_address": { 00:20:35.005 "trtype": "TCP", 00:20:35.005 "adrfam": "IPv4", 00:20:35.005 "traddr": "10.0.0.2", 00:20:35.005 "trsvcid": "4420" 00:20:35.005 }, 00:20:35.005 "peer_address": { 00:20:35.005 "trtype": "TCP", 00:20:35.005 "adrfam": "IPv4", 00:20:35.005 "traddr": "10.0.0.1", 00:20:35.005 "trsvcid": "57142" 00:20:35.005 }, 00:20:35.005 "auth": { 00:20:35.005 "state": "completed", 00:20:35.005 "digest": "sha384", 00:20:35.005 "dhgroup": "ffdhe2048" 00:20:35.005 } 00:20:35.005 } 00:20:35.005 ]' 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.005 21:27:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:35.265 21:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:20:36.640 21:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.640 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.640 21:27:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:36.640 21:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.640 21:27:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
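(Annotation for readers skimming the transcript: every iteration in this stretch runs the same DH-HMAC-CHAP round trip; only the FFDHE group and the key index change, with the digest pinned to sha384 throughout. Below is a minimal sketch of one iteration, reconstructed from the commands logged above. The target-side default RPC socket and the pre-registered key names key0..key3 / ckey0..ckey3 are assumptions carried over from the setup phase earlier in this log, which is not part of this excerpt.

  # Host side talks to the bdev_nvme application on /var/tmp/host.sock (the
  # log's "hostrpc" wrapper); target side uses rpc.py's default socket -- an
  # assumption, since the log's rpc_cmd wrapper hides the target socket.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
  SUBNQN=nqn.2024-03.io.spdk:cnode0

  # 1) Pin the host to a single digest/dhgroup combination.
  $RPC -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072

  # 2) Authorize the host on the subsystem with a key pair.
  $RPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3) Attach a controller and assert that authentication completed.
  $RPC -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n "$SUBNQN" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0
  [[ $($RPC -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
  qpairs=$($RPC nvmf_subsystem_get_qpairs "$SUBNQN")
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha384 ]]
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe3072 ]]
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
  $RPC -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0

  # 4) Repeat the handshake through the kernel initiator, then clean up.
  #    KEY0/CKEY0 are stand-ins for the literal DHHC-1:... secrets printed
  #    in the log entries above.
  nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOSTNQN" \
      --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
      --dhchap-secret "$KEY0" --dhchap-ctrl-secret "$CKEY0"
  nvme disconnect -n "$SUBNQN"
  $RPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"

The only variation across iterations is which keyN/ckeyN pair is wired in; where no controller key was generated (the key3 iterations above), the --dhchap-ctrlr-key and --dhchap-ctrl-secret arguments are dropped and the authentication is unidirectional.)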
00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.640 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.898 00:20:36.898 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:36.898 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:36.898 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:37.156 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.156 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.156 21:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:37.156 21:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.156 21:27:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:37.156 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:37.156 { 
00:20:37.156 "cntlid": 65, 00:20:37.156 "qid": 0, 00:20:37.156 "state": "enabled", 00:20:37.156 "thread": "nvmf_tgt_poll_group_000", 00:20:37.156 "listen_address": { 00:20:37.156 "trtype": "TCP", 00:20:37.156 "adrfam": "IPv4", 00:20:37.156 "traddr": "10.0.0.2", 00:20:37.156 "trsvcid": "4420" 00:20:37.156 }, 00:20:37.156 "peer_address": { 00:20:37.156 "trtype": "TCP", 00:20:37.156 "adrfam": "IPv4", 00:20:37.156 "traddr": "10.0.0.1", 00:20:37.156 "trsvcid": "50668" 00:20:37.156 }, 00:20:37.156 "auth": { 00:20:37.156 "state": "completed", 00:20:37.156 "digest": "sha384", 00:20:37.156 "dhgroup": "ffdhe3072" 00:20:37.156 } 00:20:37.156 } 00:20:37.156 ]' 00:20:37.156 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:37.415 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:37.415 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:37.415 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:37.415 21:27:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:37.415 21:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.415 21:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.415 21:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.674 21:27:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:20:38.609 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.609 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.609 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:38.609 21:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.609 21:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.609 21:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.609 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:38.609 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.610 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- 
# digest=sha384 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.867 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:39.125 00:20:39.125 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:39.125 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:39.125 21:27:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.383 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.383 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.383 21:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:39.383 21:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.383 21:27:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:39.383 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:39.383 { 00:20:39.383 "cntlid": 67, 00:20:39.384 "qid": 0, 00:20:39.384 "state": "enabled", 00:20:39.384 "thread": "nvmf_tgt_poll_group_000", 00:20:39.384 "listen_address": { 00:20:39.384 "trtype": "TCP", 00:20:39.384 "adrfam": "IPv4", 00:20:39.384 "traddr": "10.0.0.2", 00:20:39.384 "trsvcid": "4420" 00:20:39.384 }, 00:20:39.384 "peer_address": { 00:20:39.384 "trtype": "TCP", 00:20:39.384 "adrfam": "IPv4", 00:20:39.384 "traddr": "10.0.0.1", 00:20:39.384 "trsvcid": "50692" 00:20:39.384 }, 00:20:39.384 "auth": { 00:20:39.384 "state": "completed", 00:20:39.384 "digest": "sha384", 00:20:39.384 "dhgroup": "ffdhe3072" 00:20:39.384 } 00:20:39.384 } 00:20:39.384 ]' 00:20:39.384 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:39.642 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:39.642 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:39.642 21:27:14 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:39.642 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:39.642 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.642 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.642 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.900 21:27:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:20:40.845 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:40.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.846 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:40.846 21:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:40.846 21:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.846 21:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:40.846 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:40.846 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:40.846 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.103 21:27:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:41.360 00:20:41.619 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:41.619 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:41.619 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:41.876 { 00:20:41.876 "cntlid": 69, 00:20:41.876 "qid": 0, 00:20:41.876 "state": "enabled", 00:20:41.876 "thread": "nvmf_tgt_poll_group_000", 00:20:41.876 "listen_address": { 00:20:41.876 "trtype": "TCP", 00:20:41.876 "adrfam": "IPv4", 00:20:41.876 "traddr": "10.0.0.2", 00:20:41.876 "trsvcid": "4420" 00:20:41.876 }, 00:20:41.876 "peer_address": { 00:20:41.876 "trtype": "TCP", 00:20:41.876 "adrfam": "IPv4", 00:20:41.876 "traddr": "10.0.0.1", 00:20:41.876 "trsvcid": "50714" 00:20:41.876 }, 00:20:41.876 "auth": { 00:20:41.876 "state": "completed", 00:20:41.876 "digest": "sha384", 00:20:41.876 "dhgroup": "ffdhe3072" 00:20:41.876 } 00:20:41.876 } 00:20:41.876 ]' 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:41.876 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.877 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.877 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.134 21:27:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret 
DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:20:43.068 21:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.068 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.068 21:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:43.068 21:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.068 21:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.068 21:27:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.068 21:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:43.068 21:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.068 21:27:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.352 21:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.612 21:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.612 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.612 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:43.870 00:20:43.870 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:43.870 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:43.870 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:44.128 { 00:20:44.128 "cntlid": 71, 00:20:44.128 "qid": 0, 00:20:44.128 "state": "enabled", 00:20:44.128 "thread": "nvmf_tgt_poll_group_000", 00:20:44.128 "listen_address": { 00:20:44.128 "trtype": "TCP", 00:20:44.128 "adrfam": "IPv4", 00:20:44.128 "traddr": "10.0.0.2", 00:20:44.128 "trsvcid": "4420" 00:20:44.128 }, 00:20:44.128 "peer_address": { 00:20:44.128 "trtype": "TCP", 00:20:44.128 "adrfam": "IPv4", 00:20:44.128 "traddr": "10.0.0.1", 00:20:44.128 "trsvcid": "50756" 00:20:44.128 }, 00:20:44.128 "auth": { 00:20:44.128 "state": "completed", 00:20:44.128 "digest": "sha384", 00:20:44.128 "dhgroup": "ffdhe3072" 00:20:44.128 } 00:20:44.128 } 00:20:44.128 ]' 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.128 21:27:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.387 21:27:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.323 21:27:20 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.583 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.842 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.843 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:45.843 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:46.102 00:20:46.102 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:46.102 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:46.102 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:46.360 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:46.360 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:46.360 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:46.360 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:46.360 21:27:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:46.360 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:46.360 { 00:20:46.360 "cntlid": 73, 00:20:46.360 "qid": 0, 00:20:46.360 "state": "enabled", 00:20:46.360 "thread": "nvmf_tgt_poll_group_000", 00:20:46.360 "listen_address": { 00:20:46.360 "trtype": "TCP", 00:20:46.360 "adrfam": "IPv4", 00:20:46.360 "traddr": "10.0.0.2", 00:20:46.360 "trsvcid": "4420" 00:20:46.360 }, 00:20:46.360 "peer_address": { 00:20:46.360 "trtype": "TCP", 00:20:46.360 "adrfam": "IPv4", 00:20:46.360 "traddr": "10.0.0.1", 00:20:46.360 "trsvcid": "50770" 00:20:46.360 }, 00:20:46.360 "auth": { 00:20:46.360 
"state": "completed", 00:20:46.360 "digest": "sha384", 00:20:46.360 "dhgroup": "ffdhe4096" 00:20:46.360 } 00:20:46.360 } 00:20:46.360 ]' 00:20:46.360 21:27:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:46.360 21:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:46.360 21:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:46.360 21:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:46.360 21:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:46.360 21:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.360 21:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.360 21:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.618 21:27:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:20:47.551 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.552 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.552 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:47.552 21:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:47.552 21:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.552 21:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:47.552 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:47.552 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:47.552 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.121 21:27:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:48.379 00:20:48.379 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:48.379 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:48.379 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:48.636 { 00:20:48.636 "cntlid": 75, 00:20:48.636 "qid": 0, 00:20:48.636 "state": "enabled", 00:20:48.636 "thread": "nvmf_tgt_poll_group_000", 00:20:48.636 "listen_address": { 00:20:48.636 "trtype": "TCP", 00:20:48.636 "adrfam": "IPv4", 00:20:48.636 "traddr": "10.0.0.2", 00:20:48.636 "trsvcid": "4420" 00:20:48.636 }, 00:20:48.636 "peer_address": { 00:20:48.636 "trtype": "TCP", 00:20:48.636 "adrfam": "IPv4", 00:20:48.636 "traddr": "10.0.0.1", 00:20:48.636 "trsvcid": "55426" 00:20:48.636 }, 00:20:48.636 "auth": { 00:20:48.636 "state": "completed", 00:20:48.636 "digest": "sha384", 00:20:48.636 "dhgroup": "ffdhe4096" 00:20:48.636 } 00:20:48.636 } 00:20:48.636 ]' 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.636 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.894 21:27:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:20:49.828 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:49.828 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:49.828 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:49.828 21:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:49.828 21:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.828 21:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:50.086 21:27:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 
--dhchap-ctrlr-key ckey2 00:20:50.649 00:20:50.649 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:50.650 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:50.650 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:50.907 { 00:20:50.907 "cntlid": 77, 00:20:50.907 "qid": 0, 00:20:50.907 "state": "enabled", 00:20:50.907 "thread": "nvmf_tgt_poll_group_000", 00:20:50.907 "listen_address": { 00:20:50.907 "trtype": "TCP", 00:20:50.907 "adrfam": "IPv4", 00:20:50.907 "traddr": "10.0.0.2", 00:20:50.907 "trsvcid": "4420" 00:20:50.907 }, 00:20:50.907 "peer_address": { 00:20:50.907 "trtype": "TCP", 00:20:50.907 "adrfam": "IPv4", 00:20:50.907 "traddr": "10.0.0.1", 00:20:50.907 "trsvcid": "55456" 00:20:50.907 }, 00:20:50.907 "auth": { 00:20:50.907 "state": "completed", 00:20:50.907 "digest": "sha384", 00:20:50.907 "dhgroup": "ffdhe4096" 00:20:50.907 } 00:20:50.907 } 00:20:50.907 ]' 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:50.907 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:51.167 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.167 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:51.167 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.427 21:27:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:20:52.363 21:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.363 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.363 21:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:52.363 21:27:26 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.363 21:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.363 21:27:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.363 21:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:52.363 21:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.363 21:27:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.621 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:20:52.879 00:20:52.879 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:52.879 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:52.879 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:53.443 { 00:20:53.443 "cntlid": 79, 00:20:53.443 "qid": 
0, 00:20:53.443 "state": "enabled", 00:20:53.443 "thread": "nvmf_tgt_poll_group_000", 00:20:53.443 "listen_address": { 00:20:53.443 "trtype": "TCP", 00:20:53.443 "adrfam": "IPv4", 00:20:53.443 "traddr": "10.0.0.2", 00:20:53.443 "trsvcid": "4420" 00:20:53.443 }, 00:20:53.443 "peer_address": { 00:20:53.443 "trtype": "TCP", 00:20:53.443 "adrfam": "IPv4", 00:20:53.443 "traddr": "10.0.0.1", 00:20:53.443 "trsvcid": "55476" 00:20:53.443 }, 00:20:53.443 "auth": { 00:20:53.443 "state": "completed", 00:20:53.443 "digest": "sha384", 00:20:53.443 "dhgroup": "ffdhe4096" 00:20:53.443 } 00:20:53.443 } 00:20:53.443 ]' 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:53.443 21:27:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:53.443 21:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:53.443 21:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:53.443 21:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:53.443 21:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:53.443 21:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.701 21:27:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:54.636 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.636 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:54.893 21:27:29 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:54.893 21:27:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:55.466 00:20:55.467 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:55.467 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:55.467 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.726 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.726 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.726 21:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:55.726 21:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.726 21:27:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:55.726 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:55.726 { 00:20:55.726 "cntlid": 81, 00:20:55.726 "qid": 0, 00:20:55.726 "state": "enabled", 00:20:55.726 "thread": "nvmf_tgt_poll_group_000", 00:20:55.726 "listen_address": { 00:20:55.726 "trtype": "TCP", 00:20:55.726 "adrfam": "IPv4", 00:20:55.726 "traddr": "10.0.0.2", 00:20:55.726 "trsvcid": "4420" 00:20:55.726 }, 00:20:55.726 "peer_address": { 00:20:55.726 "trtype": "TCP", 00:20:55.726 "adrfam": "IPv4", 00:20:55.726 "traddr": "10.0.0.1", 00:20:55.726 "trsvcid": "55504" 00:20:55.726 }, 00:20:55.726 "auth": { 00:20:55.726 "state": "completed", 00:20:55.726 "digest": "sha384", 00:20:55.726 "dhgroup": "ffdhe6144" 00:20:55.726 } 00:20:55.726 } 00:20:55.726 ]' 00:20:55.727 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:55.727 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:55.727 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:55.727 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ 
ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:55.727 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:55.727 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:55.727 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:55.727 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:55.984 21:27:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:20:56.914 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.914 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.914 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:56.914 21:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:56.914 21:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.914 21:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:56.914 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:56.914 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:56.914 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.171 21:27:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:57.756 00:20:57.756 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:20:57.756 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:20:57.756 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:20:58.079 { 00:20:58.079 "cntlid": 83, 00:20:58.079 "qid": 0, 00:20:58.079 "state": "enabled", 00:20:58.079 "thread": "nvmf_tgt_poll_group_000", 00:20:58.079 "listen_address": { 00:20:58.079 "trtype": "TCP", 00:20:58.079 "adrfam": "IPv4", 00:20:58.079 "traddr": "10.0.0.2", 00:20:58.079 "trsvcid": "4420" 00:20:58.079 }, 00:20:58.079 "peer_address": { 00:20:58.079 "trtype": "TCP", 00:20:58.079 "adrfam": "IPv4", 00:20:58.079 "traddr": "10.0.0.1", 00:20:58.079 "trsvcid": "38248" 00:20:58.079 }, 00:20:58.079 "auth": { 00:20:58.079 "state": "completed", 00:20:58.079 "digest": "sha384", 00:20:58.079 "dhgroup": "ffdhe6144" 00:20:58.079 } 00:20:58.079 } 00:20:58.079 ]' 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:58.079 21:27:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.336 21:27:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret 
DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.704 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:59.704 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.270 00:21:00.270 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:00.270 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:00.270 21:27:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:00.528 { 00:21:00.528 "cntlid": 85, 00:21:00.528 "qid": 0, 00:21:00.528 "state": "enabled", 00:21:00.528 "thread": "nvmf_tgt_poll_group_000", 00:21:00.528 "listen_address": { 00:21:00.528 "trtype": "TCP", 00:21:00.528 "adrfam": "IPv4", 00:21:00.528 "traddr": "10.0.0.2", 00:21:00.528 "trsvcid": "4420" 00:21:00.528 }, 00:21:00.528 "peer_address": { 00:21:00.528 "trtype": "TCP", 00:21:00.528 "adrfam": "IPv4", 00:21:00.528 "traddr": "10.0.0.1", 00:21:00.528 "trsvcid": "38264" 00:21:00.528 }, 00:21:00.528 "auth": { 00:21:00.528 "state": "completed", 00:21:00.528 "digest": "sha384", 00:21:00.528 "dhgroup": "ffdhe6144" 00:21:00.528 } 00:21:00.528 } 00:21:00.528 ]' 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.528 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.786 21:27:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:21:01.718 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.718 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.718 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:01.718 21:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.718 21:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.718 21:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.718 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:01.718 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 
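The trace pairs every hostrpc call (target/auth.sh@94 and friends) with its expansion on the following entry at target/auth.sh@31. A minimal sketch of that wrapper and of the per-iteration reconfiguration it performs — assuming the socket path and script location shown in the trace, and not taken from the auth.sh source itself — is:

    hostrpc() {
        # forward any RPC to the host-side SPDK application over its local socket
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock "$@"
    }
    # each (digest, dhgroup) iteration re-arms the initiator before reconnecting, e.g.:
    hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144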
00:21:01.718 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:01.977 21:27:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:02.544 00:21:02.544 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:02.544 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:02.544 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.802 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.802 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.802 21:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:02.802 21:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.802 21:27:37 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:02.802 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:02.802 { 00:21:02.802 "cntlid": 87, 00:21:02.802 "qid": 0, 00:21:02.802 "state": "enabled", 00:21:02.802 "thread": "nvmf_tgt_poll_group_000", 00:21:02.802 "listen_address": { 00:21:02.802 "trtype": "TCP", 00:21:02.802 "adrfam": "IPv4", 00:21:02.802 "traddr": "10.0.0.2", 00:21:02.802 "trsvcid": "4420" 00:21:02.802 }, 00:21:02.802 "peer_address": { 00:21:02.802 "trtype": "TCP", 00:21:02.802 "adrfam": "IPv4", 00:21:02.802 "traddr": "10.0.0.1", 00:21:02.802 "trsvcid": "38300" 00:21:02.802 }, 00:21:02.802 "auth": { 00:21:02.802 "state": "completed", 
00:21:02.802 "digest": "sha384", 00:21:02.802 "dhgroup": "ffdhe6144" 00:21:02.802 } 00:21:02.802 } 00:21:02.802 ]' 00:21:02.802 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:03.060 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:03.060 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:03.060 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:03.060 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:03.060 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.060 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.060 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:03.318 21:27:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.251 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.251 21:27:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:04.509 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:05.443 00:21:05.443 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:05.443 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:05.443 21:27:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:05.701 { 00:21:05.701 "cntlid": 89, 00:21:05.701 "qid": 0, 00:21:05.701 "state": "enabled", 00:21:05.701 "thread": "nvmf_tgt_poll_group_000", 00:21:05.701 "listen_address": { 00:21:05.701 "trtype": "TCP", 00:21:05.701 "adrfam": "IPv4", 00:21:05.701 "traddr": "10.0.0.2", 00:21:05.701 "trsvcid": "4420" 00:21:05.701 }, 00:21:05.701 "peer_address": { 00:21:05.701 "trtype": "TCP", 00:21:05.701 "adrfam": "IPv4", 00:21:05.701 "traddr": "10.0.0.1", 00:21:05.701 "trsvcid": "38336" 00:21:05.701 }, 00:21:05.701 "auth": { 00:21:05.701 "state": "completed", 00:21:05.701 "digest": "sha384", 00:21:05.701 "dhgroup": "ffdhe8192" 00:21:05.701 } 00:21:05.701 } 00:21:05.701 ]' 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.701 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.959 21:27:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:21:06.893 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.893 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:06.893 21:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.893 21:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.893 21:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:06.893 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:06.893 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:06.893 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:07.151 21:27:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
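After each attach, the script verifies the negotiated authentication parameters before tearing the connection down again, as the next entries (target/auth.sh@44 through @49) show. A minimal sketch of that check — assuming the rpc_cmd/hostrpc helpers and the qpair JSON shape visible in the surrounding trace — is:

    # confirm the host-side controller came up under the expected name
    [[ $(hostrpc bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
    # ask the target which digest/dhgroup the queue pair actually negotiated
    qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == "sha384"    ]]
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == "ffdhe8192" ]]
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == "completed" ]]
    # detach so the next (digest, dhgroup, keyid) combination starts clean
    hostrpc bdev_nvme_detach_controller nvme0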
00:21:08.084 00:21:08.084 21:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:08.084 21:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:08.084 21:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.342 21:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.342 21:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.342 21:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:08.342 21:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.342 21:27:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:08.342 21:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:08.342 { 00:21:08.342 "cntlid": 91, 00:21:08.342 "qid": 0, 00:21:08.342 "state": "enabled", 00:21:08.342 "thread": "nvmf_tgt_poll_group_000", 00:21:08.342 "listen_address": { 00:21:08.342 "trtype": "TCP", 00:21:08.342 "adrfam": "IPv4", 00:21:08.342 "traddr": "10.0.0.2", 00:21:08.342 "trsvcid": "4420" 00:21:08.342 }, 00:21:08.342 "peer_address": { 00:21:08.342 "trtype": "TCP", 00:21:08.342 "adrfam": "IPv4", 00:21:08.342 "traddr": "10.0.0.1", 00:21:08.342 "trsvcid": "36488" 00:21:08.342 }, 00:21:08.342 "auth": { 00:21:08.342 "state": "completed", 00:21:08.342 "digest": "sha384", 00:21:08.342 "dhgroup": "ffdhe8192" 00:21:08.342 } 00:21:08.342 } 00:21:08.342 ]' 00:21:08.342 21:27:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:08.342 21:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.342 21:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:08.342 21:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:08.342 21:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:08.342 21:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.342 21:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.342 21:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:08.599 21:27:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:21:09.529 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.787 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:09.787 21:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:21:09.787 21:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.787 21:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:09.787 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:09.787 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:09.787 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.045 21:27:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:10.978 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:10.978 { 
00:21:10.978 "cntlid": 93, 00:21:10.978 "qid": 0, 00:21:10.978 "state": "enabled", 00:21:10.978 "thread": "nvmf_tgt_poll_group_000", 00:21:10.978 "listen_address": { 00:21:10.978 "trtype": "TCP", 00:21:10.978 "adrfam": "IPv4", 00:21:10.978 "traddr": "10.0.0.2", 00:21:10.978 "trsvcid": "4420" 00:21:10.978 }, 00:21:10.978 "peer_address": { 00:21:10.978 "trtype": "TCP", 00:21:10.978 "adrfam": "IPv4", 00:21:10.978 "traddr": "10.0.0.1", 00:21:10.978 "trsvcid": "36520" 00:21:10.978 }, 00:21:10.978 "auth": { 00:21:10.978 "state": "completed", 00:21:10.978 "digest": "sha384", 00:21:10.978 "dhgroup": "ffdhe8192" 00:21:10.978 } 00:21:10.978 } 00:21:10.978 ]' 00:21:10.978 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:11.236 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.236 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:11.236 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:11.236 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:11.236 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.236 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.236 21:27:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:11.495 21:27:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:21:12.428 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.429 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.429 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:12.429 21:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.429 21:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.429 21:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.429 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:12.429 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.429 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:21:12.685 21:27:47 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:12.685 21:27:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:13.616 00:21:13.616 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:13.616 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:13.616 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:13.874 { 00:21:13.874 "cntlid": 95, 00:21:13.874 "qid": 0, 00:21:13.874 "state": "enabled", 00:21:13.874 "thread": "nvmf_tgt_poll_group_000", 00:21:13.874 "listen_address": { 00:21:13.874 "trtype": "TCP", 00:21:13.874 "adrfam": "IPv4", 00:21:13.874 "traddr": "10.0.0.2", 00:21:13.874 "trsvcid": "4420" 00:21:13.874 }, 00:21:13.874 "peer_address": { 00:21:13.874 "trtype": "TCP", 00:21:13.874 "adrfam": "IPv4", 00:21:13.874 "traddr": "10.0.0.1", 00:21:13.874 "trsvcid": "36554" 00:21:13.874 }, 00:21:13.874 "auth": { 00:21:13.874 "state": "completed", 00:21:13.874 "digest": "sha384", 00:21:13.874 "dhgroup": "ffdhe8192" 00:21:13.874 } 00:21:13.874 } 00:21:13.874 ]' 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:13.874 21:27:48 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.874 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:14.132 21:27:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.066 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:15.066 21:27:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.324 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.888 00:21:15.888 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:15.888 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:15.888 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:16.145 { 00:21:16.145 "cntlid": 97, 00:21:16.145 "qid": 0, 00:21:16.145 "state": "enabled", 00:21:16.145 "thread": "nvmf_tgt_poll_group_000", 00:21:16.145 "listen_address": { 00:21:16.145 "trtype": "TCP", 00:21:16.145 "adrfam": "IPv4", 00:21:16.145 "traddr": "10.0.0.2", 00:21:16.145 "trsvcid": "4420" 00:21:16.145 }, 00:21:16.145 "peer_address": { 00:21:16.145 "trtype": "TCP", 00:21:16.145 "adrfam": "IPv4", 00:21:16.145 "traddr": "10.0.0.1", 00:21:16.145 "trsvcid": "36564" 00:21:16.145 }, 00:21:16.145 "auth": { 00:21:16.145 "state": "completed", 00:21:16.145 "digest": "sha512", 00:21:16.145 "dhgroup": "null" 00:21:16.145 } 00:21:16.145 } 00:21:16.145 ]' 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.145 21:27:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.403 21:27:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret 
DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:21:17.376 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.376 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.376 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:17.376 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.376 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.376 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.376 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:17.376 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:17.376 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:17.635 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.636 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.893 00:21:17.893 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:17.893 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:17.893 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.150 21:27:52 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.150 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.150 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:18.151 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.151 21:27:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:18.151 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:18.151 { 00:21:18.151 "cntlid": 99, 00:21:18.151 "qid": 0, 00:21:18.151 "state": "enabled", 00:21:18.151 "thread": "nvmf_tgt_poll_group_000", 00:21:18.151 "listen_address": { 00:21:18.151 "trtype": "TCP", 00:21:18.151 "adrfam": "IPv4", 00:21:18.151 "traddr": "10.0.0.2", 00:21:18.151 "trsvcid": "4420" 00:21:18.151 }, 00:21:18.151 "peer_address": { 00:21:18.151 "trtype": "TCP", 00:21:18.151 "adrfam": "IPv4", 00:21:18.151 "traddr": "10.0.0.1", 00:21:18.151 "trsvcid": "45182" 00:21:18.151 }, 00:21:18.151 "auth": { 00:21:18.151 "state": "completed", 00:21:18.151 "digest": "sha512", 00:21:18.151 "dhgroup": "null" 00:21:18.151 } 00:21:18.151 } 00:21:18.151 ]' 00:21:18.151 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:18.409 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:18.409 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:18.409 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:18.409 21:27:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:18.409 21:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.409 21:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.409 21:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.667 21:27:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:21:19.603 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.603 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:19.603 21:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.603 21:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.603 21:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.603 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:19.603 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.603 21:27:54 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.861 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:20.119 00:21:20.119 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:20.119 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:20.119 21:27:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:20.377 { 00:21:20.377 "cntlid": 101, 00:21:20.377 "qid": 0, 00:21:20.377 "state": "enabled", 00:21:20.377 "thread": "nvmf_tgt_poll_group_000", 00:21:20.377 "listen_address": { 00:21:20.377 "trtype": "TCP", 00:21:20.377 "adrfam": "IPv4", 00:21:20.377 "traddr": "10.0.0.2", 00:21:20.377 "trsvcid": "4420" 00:21:20.377 }, 00:21:20.377 "peer_address": { 00:21:20.377 "trtype": "TCP", 00:21:20.377 "adrfam": "IPv4", 00:21:20.377 "traddr": "10.0.0.1", 00:21:20.377 "trsvcid": "45208" 00:21:20.377 }, 00:21:20.377 "auth": 
{ 00:21:20.377 "state": "completed", 00:21:20.377 "digest": "sha512", 00:21:20.377 "dhgroup": "null" 00:21:20.377 } 00:21:20.377 } 00:21:20.377 ]' 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:20.377 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:20.635 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:20.635 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.635 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.894 21:27:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:21:21.829 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:21.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:21.829 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:21.829 21:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:21.829 21:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.829 21:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:21.829 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:21.829 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:21.829 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.095 21:27:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.096 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.096 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:22.354 00:21:22.354 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:22.354 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:22.354 21:27:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:22.612 { 00:21:22.612 "cntlid": 103, 00:21:22.612 "qid": 0, 00:21:22.612 "state": "enabled", 00:21:22.612 "thread": "nvmf_tgt_poll_group_000", 00:21:22.612 "listen_address": { 00:21:22.612 "trtype": "TCP", 00:21:22.612 "adrfam": "IPv4", 00:21:22.612 "traddr": "10.0.0.2", 00:21:22.612 "trsvcid": "4420" 00:21:22.612 }, 00:21:22.612 "peer_address": { 00:21:22.612 "trtype": "TCP", 00:21:22.612 "adrfam": "IPv4", 00:21:22.612 "traddr": "10.0.0.1", 00:21:22.612 "trsvcid": "45250" 00:21:22.612 }, 00:21:22.612 "auth": { 00:21:22.612 "state": "completed", 00:21:22.612 "digest": "sha512", 00:21:22.612 "dhgroup": "null" 00:21:22.612 } 00:21:22.612 } 00:21:22.612 ]' 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:22.612 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:22.871 21:27:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect 
-t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:23.807 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:23.807 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.065 21:27:58 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:24.636 00:21:24.636 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:24.636 21:27:59 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:24.636 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:24.894 { 00:21:24.894 "cntlid": 105, 00:21:24.894 "qid": 0, 00:21:24.894 "state": "enabled", 00:21:24.894 "thread": "nvmf_tgt_poll_group_000", 00:21:24.894 "listen_address": { 00:21:24.894 "trtype": "TCP", 00:21:24.894 "adrfam": "IPv4", 00:21:24.894 "traddr": "10.0.0.2", 00:21:24.894 "trsvcid": "4420" 00:21:24.894 }, 00:21:24.894 "peer_address": { 00:21:24.894 "trtype": "TCP", 00:21:24.894 "adrfam": "IPv4", 00:21:24.894 "traddr": "10.0.0.1", 00:21:24.894 "trsvcid": "45290" 00:21:24.894 }, 00:21:24.894 "auth": { 00:21:24.894 "state": "completed", 00:21:24.894 "digest": "sha512", 00:21:24.894 "dhgroup": "ffdhe2048" 00:21:24.894 } 00:21:24.894 } 00:21:24.894 ]' 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:24.894 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.154 21:27:59 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:21:26.089 21:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.349 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.349 21:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:26.350 21:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.350 21:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
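
The cycle that just completed (sha512 digest, ffdhe2048 DH group, key0) is the template for every remaining iteration below: key1 through key3 under ffdhe2048, then the same four keys under ffdhe3072 and ffdhe4096. Each iteration drives the target and the host-side bdev_nvme stack through the same RPC sequence. The following is a minimal sketch of one iteration, using only rpc.py calls that appear verbatim in this log; the key names key0/ckey0 are assumed to have been loaded into the SPDK keyring earlier in the test, and the standalone-script framing is an assumption about how target/auth.sh structures its loop, not a copy of it:

  #!/usr/bin/env bash
  # Hypothetical standalone replay of one connect_authenticate iteration.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  hostsock=/var/tmp/host.sock   # RPC socket of the host-side SPDK app
  subnqn=nqn.2024-03.io.spdk:cnode0
  hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

  # 1. Pin the host initiator to a single digest and DH group, so the
  #    handshake can only succeed with the combination under test.
  "$rpc" -s "$hostsock" bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048

  # 2. Allow the host on the target subsystem with a DH-HMAC-CHAP key
  #    (plus a controller key for bidirectional authentication).
  #    Target-side calls use the default RPC socket, as in the log.
  "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 3. Attach a controller from the host side; this performs the handshake.
  "$rpc" -s "$hostsock" bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
      --dhchap-key key0 --dhchap-ctrlr-key ckey0

  # 4. Confirm on the target that the admin qpair (qid 0) authenticated
  #    with the expected parameters.
  "$rpc" nvmf_subsystem_get_qpairs "$subnqn" | jq -r '.[0].auth.state'   # "completed"

  # 5. Tear down and deauthorize before the next (digest, dhgroup, key) tuple.
  "$rpc" -s "$hostsock" bdev_nvme_detach_controller nvme0
  "$rpc" nvmf_subsystem_remove_host "$subnqn" "$hostnqn"

Between steps 4 and 5 the test also exercises the kernel initiator path: `nvme connect ... --dhchap-secret DHHC-1:<t>:<base64>: [--dhchap-ctrl-secret ...]` followed by `nvme disconnect`, as seen in the log. The `<t>` field of a DHHC-1 secret selects the hash transformation applied to the key material per the NVMe in-band authentication spec (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512), which is why key0 through key3 in this log carry the prefixes DHHC-1:00: through DHHC-1:03:.
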
00:21:26.350 21:28:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.350 21:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:26.350 21:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:26.350 21:28:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:26.350 21:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.610 21:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:26.610 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.610 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:26.868 00:21:26.868 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:26.868 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:26.868 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:27.125 { 00:21:27.125 "cntlid": 107, 00:21:27.125 "qid": 0, 00:21:27.125 "state": "enabled", 00:21:27.125 "thread": 
"nvmf_tgt_poll_group_000", 00:21:27.125 "listen_address": { 00:21:27.125 "trtype": "TCP", 00:21:27.125 "adrfam": "IPv4", 00:21:27.125 "traddr": "10.0.0.2", 00:21:27.125 "trsvcid": "4420" 00:21:27.125 }, 00:21:27.125 "peer_address": { 00:21:27.125 "trtype": "TCP", 00:21:27.125 "adrfam": "IPv4", 00:21:27.125 "traddr": "10.0.0.1", 00:21:27.125 "trsvcid": "45312" 00:21:27.125 }, 00:21:27.125 "auth": { 00:21:27.125 "state": "completed", 00:21:27.125 "digest": "sha512", 00:21:27.125 "dhgroup": "ffdhe2048" 00:21:27.125 } 00:21:27.125 } 00:21:27.125 ]' 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.125 21:28:01 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.382 21:28:02 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:21:28.316 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.316 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.316 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:28.316 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.316 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.316 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.316 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:28.316 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.316 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:28.573 21:28:03 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:28.573 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:29.138 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:29.138 { 00:21:29.138 "cntlid": 109, 00:21:29.138 "qid": 0, 00:21:29.138 "state": "enabled", 00:21:29.138 "thread": "nvmf_tgt_poll_group_000", 00:21:29.138 "listen_address": { 00:21:29.138 "trtype": "TCP", 00:21:29.138 "adrfam": "IPv4", 00:21:29.138 "traddr": "10.0.0.2", 00:21:29.138 "trsvcid": "4420" 00:21:29.138 }, 00:21:29.138 "peer_address": { 00:21:29.138 "trtype": "TCP", 00:21:29.138 "adrfam": "IPv4", 00:21:29.138 "traddr": "10.0.0.1", 00:21:29.138 "trsvcid": "52234" 00:21:29.138 }, 00:21:29.138 "auth": { 00:21:29.138 "state": "completed", 00:21:29.138 "digest": "sha512", 00:21:29.138 "dhgroup": "ffdhe2048" 00:21:29.138 } 00:21:29.138 } 00:21:29.138 ]' 00:21:29.138 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:29.396 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:29.396 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:29.396 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:29.396 21:28:03 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:29.396 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.396 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.396 21:28:03 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.654 21:28:04 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:21:30.591 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:30.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:30.591 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:30.591 21:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.591 21:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.591 21:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.591 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:30.591 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.591 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:30.848 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:30.849 21:28:05 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:31.106 00:21:31.106 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:31.106 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:31.106 21:28:05 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.364 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.364 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.364 21:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:31.364 21:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.364 21:28:06 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:31.364 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:31.364 { 00:21:31.364 "cntlid": 111, 00:21:31.364 "qid": 0, 00:21:31.364 "state": "enabled", 00:21:31.364 "thread": "nvmf_tgt_poll_group_000", 00:21:31.364 "listen_address": { 00:21:31.364 "trtype": "TCP", 00:21:31.364 "adrfam": "IPv4", 00:21:31.364 "traddr": "10.0.0.2", 00:21:31.364 "trsvcid": "4420" 00:21:31.364 }, 00:21:31.364 "peer_address": { 00:21:31.364 "trtype": "TCP", 00:21:31.364 "adrfam": "IPv4", 00:21:31.364 "traddr": "10.0.0.1", 00:21:31.364 "trsvcid": "52272" 00:21:31.364 }, 00:21:31.364 "auth": { 00:21:31.364 "state": "completed", 00:21:31.364 "digest": "sha512", 00:21:31.364 "dhgroup": "ffdhe2048" 00:21:31.364 } 00:21:31.364 } 00:21:31.364 ]' 00:21:31.365 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:31.365 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.365 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:31.623 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:31.623 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:31.623 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.623 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.623 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:31.881 21:28:06 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:21:32.817 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.817 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.818 21:28:07 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:32.818 21:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:32.818 21:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.818 21:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:32.818 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:32.818 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:32.818 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:32.818 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.076 21:28:07 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:33.334 00:21:33.334 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:33.334 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:33.334 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.592 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.592 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.592 21:28:08 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:21:33.592 21:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.592 21:28:08 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:33.592 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:33.592 { 00:21:33.592 "cntlid": 113, 00:21:33.592 "qid": 0, 00:21:33.592 "state": "enabled", 00:21:33.592 "thread": "nvmf_tgt_poll_group_000", 00:21:33.592 "listen_address": { 00:21:33.592 "trtype": "TCP", 00:21:33.592 "adrfam": "IPv4", 00:21:33.592 "traddr": "10.0.0.2", 00:21:33.592 "trsvcid": "4420" 00:21:33.592 }, 00:21:33.592 "peer_address": { 00:21:33.592 "trtype": "TCP", 00:21:33.592 "adrfam": "IPv4", 00:21:33.592 "traddr": "10.0.0.1", 00:21:33.592 "trsvcid": "52296" 00:21:33.592 }, 00:21:33.592 "auth": { 00:21:33.592 "state": "completed", 00:21:33.592 "digest": "sha512", 00:21:33.592 "dhgroup": "ffdhe3072" 00:21:33.592 } 00:21:33.592 } 00:21:33.592 ]' 00:21:33.592 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:33.850 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.850 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:33.850 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:33.850 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:33.850 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.850 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.850 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:34.119 21:28:08 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:21:35.064 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:35.064 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:35.064 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:35.064 21:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.064 21:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.064 21:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.064 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:35.064 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.064 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.321 21:28:09 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:35.578 00:21:35.578 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:35.578 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:35.578 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:35.836 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:35.836 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:35.836 21:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.836 21:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:35.836 21:28:10 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.836 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:35.836 { 00:21:35.836 "cntlid": 115, 00:21:35.836 "qid": 0, 00:21:35.836 "state": "enabled", 00:21:35.836 "thread": "nvmf_tgt_poll_group_000", 00:21:35.836 "listen_address": { 00:21:35.836 "trtype": "TCP", 00:21:35.836 "adrfam": "IPv4", 00:21:35.836 "traddr": "10.0.0.2", 00:21:35.836 "trsvcid": "4420" 00:21:35.836 }, 00:21:35.836 "peer_address": { 00:21:35.836 "trtype": "TCP", 00:21:35.836 "adrfam": "IPv4", 00:21:35.836 "traddr": "10.0.0.1", 00:21:35.836 "trsvcid": "52326" 00:21:35.836 }, 00:21:35.836 "auth": { 00:21:35.836 "state": "completed", 00:21:35.836 "digest": "sha512", 00:21:35.836 "dhgroup": "ffdhe3072" 00:21:35.836 } 00:21:35.836 } 
00:21:35.836 ]' 00:21:35.836 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:36.094 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.094 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:36.094 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:36.094 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:36.094 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.094 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.094 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:36.351 21:28:10 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:21:37.285 21:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.286 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.286 21:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:37.286 21:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.286 21:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.286 21:28:11 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.286 21:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:37.286 21:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.286 21:28:11 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.543 21:28:12 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.543 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:37.800 00:21:37.800 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:37.800 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:37.800 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:38.057 { 00:21:38.057 "cntlid": 117, 00:21:38.057 "qid": 0, 00:21:38.057 "state": "enabled", 00:21:38.057 "thread": "nvmf_tgt_poll_group_000", 00:21:38.057 "listen_address": { 00:21:38.057 "trtype": "TCP", 00:21:38.057 "adrfam": "IPv4", 00:21:38.057 "traddr": "10.0.0.2", 00:21:38.057 "trsvcid": "4420" 00:21:38.057 }, 00:21:38.057 "peer_address": { 00:21:38.057 "trtype": "TCP", 00:21:38.057 "adrfam": "IPv4", 00:21:38.057 "traddr": "10.0.0.1", 00:21:38.057 "trsvcid": "41720" 00:21:38.057 }, 00:21:38.057 "auth": { 00:21:38.057 "state": "completed", 00:21:38.057 "digest": "sha512", 00:21:38.057 "dhgroup": "ffdhe3072" 00:21:38.057 } 00:21:38.057 } 00:21:38.057 ]' 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.057 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:38.315 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:38.315 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:38.315 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.315 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.315 21:28:12 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.571 21:28:13 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t 
tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:21:39.507 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.507 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.507 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:39.507 21:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.507 21:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.507 21:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.507 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:39.507 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.507 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:39.766 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:40.333 00:21:40.333 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:40.333 21:28:14 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:40.333 21:28:14 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:40.333 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:40.333 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:40.333 21:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:40.333 21:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.590 21:28:15 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:40.590 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:40.590 { 00:21:40.590 "cntlid": 119, 00:21:40.590 "qid": 0, 00:21:40.590 "state": "enabled", 00:21:40.590 "thread": "nvmf_tgt_poll_group_000", 00:21:40.590 "listen_address": { 00:21:40.590 "trtype": "TCP", 00:21:40.590 "adrfam": "IPv4", 00:21:40.590 "traddr": "10.0.0.2", 00:21:40.590 "trsvcid": "4420" 00:21:40.590 }, 00:21:40.590 "peer_address": { 00:21:40.590 "trtype": "TCP", 00:21:40.590 "adrfam": "IPv4", 00:21:40.590 "traddr": "10.0.0.1", 00:21:40.590 "trsvcid": "41746" 00:21:40.590 }, 00:21:40.590 "auth": { 00:21:40.590 "state": "completed", 00:21:40.590 "digest": "sha512", 00:21:40.590 "dhgroup": "ffdhe3072" 00:21:40.590 } 00:21:40.590 } 00:21:40.590 ]' 00:21:40.591 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:40.591 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:40.591 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:40.591 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:40.591 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:40.591 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.591 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.591 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.847 21:28:15 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:21:41.782 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:41.782 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:41.782 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:41.782 21:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:41.782 21:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.782 21:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:41.782 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:41.782 21:28:16 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:41.782 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:41.782 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.040 21:28:16 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.608 00:21:42.608 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:42.608 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:42.608 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.608 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.608 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.608 21:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:42.608 21:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:42.866 { 00:21:42.866 "cntlid": 121, 00:21:42.866 "qid": 0, 00:21:42.866 "state": "enabled", 00:21:42.866 "thread": "nvmf_tgt_poll_group_000", 00:21:42.866 "listen_address": { 00:21:42.866 "trtype": "TCP", 00:21:42.866 "adrfam": "IPv4", 
00:21:42.866 "traddr": "10.0.0.2", 00:21:42.866 "trsvcid": "4420" 00:21:42.866 }, 00:21:42.866 "peer_address": { 00:21:42.866 "trtype": "TCP", 00:21:42.866 "adrfam": "IPv4", 00:21:42.866 "traddr": "10.0.0.1", 00:21:42.866 "trsvcid": "41776" 00:21:42.866 }, 00:21:42.866 "auth": { 00:21:42.866 "state": "completed", 00:21:42.866 "digest": "sha512", 00:21:42.866 "dhgroup": "ffdhe4096" 00:21:42.866 } 00:21:42.866 } 00:21:42.866 ]' 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.866 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.124 21:28:17 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:21:44.062 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:44.062 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:44.062 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:44.062 21:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.062 21:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.062 21:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.062 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:44.062 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.062 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:44.320 21:28:18 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.320 21:28:18 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.886 00:21:44.886 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:44.886 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.886 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:45.144 { 00:21:45.144 "cntlid": 123, 00:21:45.144 "qid": 0, 00:21:45.144 "state": "enabled", 00:21:45.144 "thread": "nvmf_tgt_poll_group_000", 00:21:45.144 "listen_address": { 00:21:45.144 "trtype": "TCP", 00:21:45.144 "adrfam": "IPv4", 00:21:45.144 "traddr": "10.0.0.2", 00:21:45.144 "trsvcid": "4420" 00:21:45.144 }, 00:21:45.144 "peer_address": { 00:21:45.144 "trtype": "TCP", 00:21:45.144 "adrfam": "IPv4", 00:21:45.144 "traddr": "10.0.0.1", 00:21:45.144 "trsvcid": "41798" 00:21:45.144 }, 00:21:45.144 "auth": { 00:21:45.144 "state": "completed", 00:21:45.144 "digest": "sha512", 00:21:45.144 "dhgroup": "ffdhe4096" 00:21:45.144 } 00:21:45.144 } 00:21:45.144 ]' 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:45.144 21:28:19 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:45.144 21:28:19 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:45.403 21:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:21:46.338 21:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:46.338 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:46.338 21:28:20 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:46.338 21:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.338 21:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.338 21:28:20 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.338 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:46.338 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.338 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:46.595 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:47.161 00:21:47.161 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:47.161 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:47.161 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.417 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.417 21:28:21 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.417 21:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:47.417 21:28:21 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:47.417 { 00:21:47.417 "cntlid": 125, 00:21:47.417 "qid": 0, 00:21:47.417 "state": "enabled", 00:21:47.417 "thread": "nvmf_tgt_poll_group_000", 00:21:47.417 "listen_address": { 00:21:47.417 "trtype": "TCP", 00:21:47.417 "adrfam": "IPv4", 00:21:47.417 "traddr": "10.0.0.2", 00:21:47.417 "trsvcid": "4420" 00:21:47.417 }, 00:21:47.417 "peer_address": { 00:21:47.417 "trtype": "TCP", 00:21:47.417 "adrfam": "IPv4", 00:21:47.417 "traddr": "10.0.0.1", 00:21:47.417 "trsvcid": "42298" 00:21:47.417 }, 00:21:47.417 "auth": { 00:21:47.417 "state": "completed", 00:21:47.417 "digest": "sha512", 00:21:47.417 "dhgroup": "ffdhe4096" 00:21:47.417 } 00:21:47.417 } 00:21:47.417 ]' 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.417 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:47.675 21:28:22 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:21:48.607 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.607 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
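
[Annotation] The trace repeats one fixed verification cycle per (digest, dhgroup, keyid) tuple. Below is a condensed sketch of one iteration, reconstructed from the invocations echoed above; it is illustrative shorthand, not the literal body of target/auth.sh. rpc_cmd drives the nvmf target app, while hostrpc is the wrapper (visible at auth.sh@31) that points rpc.py at the host-side app on /var/tmp/host.sock. $KEY and $CKEY stand in for the literal DHHC-1 secret strings. Note also, readable directly from the trace, that the ckeys entry for key3 is empty, so the ${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"} expansion silently drops the controller key for that iteration.

    # One iteration of the auth matrix exercised above (sketch, paths abbreviated).
    hostrpc() { scripts/rpc.py -s /var/tmp/host.sock "$@"; }
    HOST_NQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
    SUBNQN=nqn.2024-03.io.spdk:cnode0

    # 1. Pin the SPDK host stack to a single digest/dhgroup combination.
    hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192

    # 2. Register the host on the target with its key (and controller key, when set).
    rpc_cmd nvmf_subsystem_add_host "$SUBNQN" "$HOST_NQN" \
            --dhchap-key key0 --dhchap-ctrlr-key ckey0

    # 3. Authenticate via the SPDK initiator and verify the negotiated qpair state.
    hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$HOST_NQN" -n "$SUBNQN" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    rpc_cmd nvmf_subsystem_get_qpairs "$SUBNQN" | jq -r '.[0].auth.state'  # "completed"
    hostrpc bdev_nvme_detach_controller nvme0

    # 4. Repeat the handshake with the kernel initiator, then tear down.
    nvme connect -t tcp -a 10.0.0.2 -n "$SUBNQN" -i 1 -q "$HOST_NQN" \
         --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 \
         --dhchap-secret "$KEY" --dhchap-ctrl-secret "$CKEY"   # DHHC-1 literals from the log
    nvme disconnect -n "$SUBNQN"
    rpc_cmd nvmf_subsystem_remove_host "$SUBNQN" "$HOST_NQN"
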
00:21:48.607 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:48.607 21:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.607 21:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.607 21:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:48.607 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:48.607 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.607 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:48.866 21:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.124 21:28:23 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.124 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.124 21:28:23 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:49.380 00:21:49.380 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:49.380 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.380 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 
-- # set +x 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:49.637 { 00:21:49.637 "cntlid": 127, 00:21:49.637 "qid": 0, 00:21:49.637 "state": "enabled", 00:21:49.637 "thread": "nvmf_tgt_poll_group_000", 00:21:49.637 "listen_address": { 00:21:49.637 "trtype": "TCP", 00:21:49.637 "adrfam": "IPv4", 00:21:49.637 "traddr": "10.0.0.2", 00:21:49.637 "trsvcid": "4420" 00:21:49.637 }, 00:21:49.637 "peer_address": { 00:21:49.637 "trtype": "TCP", 00:21:49.637 "adrfam": "IPv4", 00:21:49.637 "traddr": "10.0.0.1", 00:21:49.637 "trsvcid": "42328" 00:21:49.637 }, 00:21:49.637 "auth": { 00:21:49.637 "state": "completed", 00:21:49.637 "digest": "sha512", 00:21:49.637 "dhgroup": "ffdhe4096" 00:21:49.637 } 00:21:49.637 } 00:21:49.637 ]' 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.637 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:49.895 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.895 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:49.895 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.895 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.895 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:50.153 21:28:24 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.092 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.092 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe6144 0 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.350 21:28:25 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:51.917 00:21:51.917 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:51.917 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:51.918 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:52.208 { 00:21:52.208 "cntlid": 129, 00:21:52.208 "qid": 0, 00:21:52.208 "state": "enabled", 00:21:52.208 "thread": "nvmf_tgt_poll_group_000", 00:21:52.208 "listen_address": { 00:21:52.208 "trtype": "TCP", 00:21:52.208 "adrfam": "IPv4", 00:21:52.208 "traddr": "10.0.0.2", 00:21:52.208 "trsvcid": "4420" 00:21:52.208 }, 00:21:52.208 "peer_address": { 00:21:52.208 "trtype": "TCP", 00:21:52.208 "adrfam": "IPv4", 00:21:52.208 "traddr": "10.0.0.1", 00:21:52.208 "trsvcid": "42352" 00:21:52.208 }, 00:21:52.208 "auth": { 00:21:52.208 "state": "completed", 00:21:52.208 "digest": "sha512", 00:21:52.208 "dhgroup": "ffdhe6144" 00:21:52.208 } 00:21:52.208 } 00:21:52.208 ]' 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:52.208 21:28:26 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.208 21:28:26 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.466 21:28:27 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:21:53.404 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.404 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:53.404 21:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.404 21:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.404 21:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.404 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:53.404 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:53.404 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.662 21:28:28 
nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:53.662 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:54.228 00:21:54.228 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:54.228 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:54.228 21:28:28 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:54.486 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:54.486 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:54.486 21:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:54.486 21:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:54.486 21:28:29 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:54.486 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:54.486 { 00:21:54.486 "cntlid": 131, 00:21:54.486 "qid": 0, 00:21:54.486 "state": "enabled", 00:21:54.486 "thread": "nvmf_tgt_poll_group_000", 00:21:54.486 "listen_address": { 00:21:54.486 "trtype": "TCP", 00:21:54.486 "adrfam": "IPv4", 00:21:54.486 "traddr": "10.0.0.2", 00:21:54.486 "trsvcid": "4420" 00:21:54.486 }, 00:21:54.486 "peer_address": { 00:21:54.486 "trtype": "TCP", 00:21:54.486 "adrfam": "IPv4", 00:21:54.486 "traddr": "10.0.0.1", 00:21:54.486 "trsvcid": "42388" 00:21:54.486 }, 00:21:54.486 "auth": { 00:21:54.486 "state": "completed", 00:21:54.486 "digest": "sha512", 00:21:54.486 "dhgroup": "ffdhe6144" 00:21:54.486 } 00:21:54.486 } 00:21:54.486 ]' 00:21:54.486 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:54.744 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:54.744 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:54.744 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:54.744 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:54.744 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.744 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.744 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:55.002 21:28:29 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:21:55.936 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:55.936 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:55.936 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:55.936 21:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:55.936 21:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.936 21:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:55.936 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:55.936 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.936 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.194 21:28:30 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:56.761 00:21:56.761 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:56.761 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:56.761 21:28:31 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:57.019 { 00:21:57.019 "cntlid": 133, 00:21:57.019 "qid": 0, 00:21:57.019 "state": "enabled", 00:21:57.019 "thread": "nvmf_tgt_poll_group_000", 00:21:57.019 "listen_address": { 00:21:57.019 "trtype": "TCP", 00:21:57.019 "adrfam": "IPv4", 00:21:57.019 "traddr": "10.0.0.2", 00:21:57.019 "trsvcid": "4420" 00:21:57.019 }, 00:21:57.019 "peer_address": { 00:21:57.019 "trtype": "TCP", 00:21:57.019 "adrfam": "IPv4", 00:21:57.019 "traddr": "10.0.0.1", 00:21:57.019 "trsvcid": "42430" 00:21:57.019 }, 00:21:57.019 "auth": { 00:21:57.019 "state": "completed", 00:21:57.019 "digest": "sha512", 00:21:57.019 "dhgroup": "ffdhe6144" 00:21:57.019 } 00:21:57.019 } 00:21:57.019 ]' 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.019 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:57.277 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.277 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:57.277 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.277 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.277 21:28:31 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.536 21:28:32 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:21:58.471 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.471 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:58.471 21:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.471 21:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.471 21:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.471 21:28:33 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:21:58.471 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.471 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:58.727 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:21:59.291 00:21:59.291 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:21:59.291 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:21:59.291 21:28:33 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:59.548 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:59.548 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:21:59.549 { 00:21:59.549 "cntlid": 135, 00:21:59.549 "qid": 0, 00:21:59.549 "state": "enabled", 00:21:59.549 "thread": "nvmf_tgt_poll_group_000", 00:21:59.549 "listen_address": { 00:21:59.549 "trtype": "TCP", 00:21:59.549 "adrfam": "IPv4", 00:21:59.549 "traddr": "10.0.0.2", 00:21:59.549 "trsvcid": "4420" 00:21:59.549 }, 
00:21:59.549 "peer_address": { 00:21:59.549 "trtype": "TCP", 00:21:59.549 "adrfam": "IPv4", 00:21:59.549 "traddr": "10.0.0.1", 00:21:59.549 "trsvcid": "55442" 00:21:59.549 }, 00:21:59.549 "auth": { 00:21:59.549 "state": "completed", 00:21:59.549 "digest": "sha512", 00:21:59.549 "dhgroup": "ffdhe6144" 00:21:59.549 } 00:21:59.549 } 00:21:59.549 ]' 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.549 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.806 21:28:34 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:00.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:00.742 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.000 21:28:35 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.935 00:22:01.935 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:01.935 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:01.935 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:02.193 { 00:22:02.193 "cntlid": 137, 00:22:02.193 "qid": 0, 00:22:02.193 "state": "enabled", 00:22:02.193 "thread": "nvmf_tgt_poll_group_000", 00:22:02.193 "listen_address": { 00:22:02.193 "trtype": "TCP", 00:22:02.193 "adrfam": "IPv4", 00:22:02.193 "traddr": "10.0.0.2", 00:22:02.193 "trsvcid": "4420" 00:22:02.193 }, 00:22:02.193 "peer_address": { 00:22:02.193 "trtype": "TCP", 00:22:02.193 "adrfam": "IPv4", 00:22:02.193 "traddr": "10.0.0.1", 00:22:02.193 "trsvcid": "55458" 00:22:02.193 }, 00:22:02.193 "auth": { 00:22:02.193 "state": "completed", 00:22:02.193 "digest": "sha512", 00:22:02.193 "dhgroup": "ffdhe8192" 00:22:02.193 } 00:22:02.193 } 00:22:02.193 ]' 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.193 21:28:36 
nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.193 21:28:36 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.451 21:28:37 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.829 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.829 21:28:38 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.768 00:22:04.768 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:04.768 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:04.768 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:05.027 { 00:22:05.027 "cntlid": 139, 00:22:05.027 "qid": 0, 00:22:05.027 "state": "enabled", 00:22:05.027 "thread": "nvmf_tgt_poll_group_000", 00:22:05.027 "listen_address": { 00:22:05.027 "trtype": "TCP", 00:22:05.027 "adrfam": "IPv4", 00:22:05.027 "traddr": "10.0.0.2", 00:22:05.027 "trsvcid": "4420" 00:22:05.027 }, 00:22:05.027 "peer_address": { 00:22:05.027 "trtype": "TCP", 00:22:05.027 "adrfam": "IPv4", 00:22:05.027 "traddr": "10.0.0.1", 00:22:05.027 "trsvcid": "55474" 00:22:05.027 }, 00:22:05.027 "auth": { 00:22:05.027 "state": "completed", 00:22:05.027 "digest": "sha512", 00:22:05.027 "dhgroup": "ffdhe8192" 00:22:05.027 } 00:22:05.027 } 00:22:05.027 ]' 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:05.027 21:28:39 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:05.595 21:28:40 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:Y2YzMmE1ZDhmMmU1ZTYyNDZmODAxMDE5ZWY3MDFmZTAl51Wj: --dhchap-ctrl-secret DHHC-1:02:Njc0ZWQ1NWU1MmFjZjZiMzgxYzY0NDRmOWM3ZDg1MjkzM2Q2YTNkNmEyOTViYjI12uz5/Q==: 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:06.532 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:06.532 21:28:41 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:07.481 00:22:07.481 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:07.481 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.481 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:07.739 { 00:22:07.739 "cntlid": 141, 00:22:07.739 "qid": 0, 00:22:07.739 "state": "enabled", 00:22:07.739 "thread": "nvmf_tgt_poll_group_000", 00:22:07.739 "listen_address": { 00:22:07.739 "trtype": "TCP", 00:22:07.739 "adrfam": "IPv4", 00:22:07.739 "traddr": "10.0.0.2", 00:22:07.739 "trsvcid": "4420" 00:22:07.739 }, 00:22:07.739 "peer_address": { 00:22:07.739 "trtype": "TCP", 00:22:07.739 "adrfam": "IPv4", 00:22:07.739 "traddr": "10.0.0.1", 00:22:07.739 "trsvcid": "59990" 00:22:07.739 }, 00:22:07.739 "auth": { 00:22:07.739 "state": "completed", 00:22:07.739 "digest": "sha512", 00:22:07.739 "dhgroup": "ffdhe8192" 00:22:07.739 } 00:22:07.739 } 00:22:07.739 ]' 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.739 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:07.998 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.998 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.998 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:07.998 21:28:42 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:OGRiYTNmOTU0MWUwNTM5ZjBhZWMyN2M5ZTU3OTc5OTM2YjRiNjJjYjY0N2M1MmI5NwPrQQ==: --dhchap-ctrl-secret DHHC-1:01:N2U5ZTc4MjA4NjYwMWE3OTA3MTczMmIyZGUwMDRhNWLMwsiD: 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:09.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate 
sha512 ffdhe8192 3 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:09.373 21:28:43 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.373 21:28:44 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:09.373 21:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:09.374 21:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:10.308 00:22:10.308 21:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:10.308 21:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:10.308 21:28:44 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:10.595 { 00:22:10.595 "cntlid": 143, 00:22:10.595 "qid": 0, 00:22:10.595 "state": "enabled", 00:22:10.595 "thread": "nvmf_tgt_poll_group_000", 00:22:10.595 "listen_address": { 00:22:10.595 "trtype": "TCP", 00:22:10.595 "adrfam": "IPv4", 00:22:10.595 "traddr": "10.0.0.2", 00:22:10.595 "trsvcid": "4420" 00:22:10.595 }, 00:22:10.595 "peer_address": { 00:22:10.595 "trtype": "TCP", 00:22:10.595 "adrfam": "IPv4", 00:22:10.595 "traddr": "10.0.0.1", 00:22:10.595 "trsvcid": "60008" 00:22:10.595 }, 00:22:10.595 "auth": { 00:22:10.595 "state": "completed", 00:22:10.595 "digest": "sha512", 00:22:10.595 "dhgroup": "ffdhe8192" 00:22:10.595 } 00:22:10.595 } 00:22:10.595 ]' 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:10.595 
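Note: the key3 pass being verified here is the unidirectional case — key3 was registered without a matching controller key, so only the host proves its identity to the target. A minimal sketch of the RPC sequence the harness is driving, using only commands and addresses that appear in this log ($HOSTNQN stands in for the full nqn.2014-08.org.nvmexpress:uuid:... string):

  # host side: restrict negotiation to the digest/dhgroup pair under test
  rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
      --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
  # target side: allow the host NQN with a host key only (no --dhchap-ctrlr-key)
  rpc.py nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$HOSTNQN" --dhchap-key key3
  # host side: attach; the fabric CONNECT now runs DH-HMAC-CHAP with sha512/ffdhe8192
  rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
      -a 10.0.0.2 -s 4420 -q "$HOSTNQN" -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3

The jq assertions surrounding this point are the actual checks: the resulting qpair must report auth.state "completed" with digest sha512 and dhgroup ffdhe8192.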
21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:10.595 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.853 21:28:45 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:11.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:11.784 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.041 21:28:46 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:12.971 00:22:12.971 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:12.971 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:12.971 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:13.229 { 00:22:13.229 "cntlid": 145, 00:22:13.229 "qid": 0, 00:22:13.229 "state": "enabled", 00:22:13.229 "thread": "nvmf_tgt_poll_group_000", 00:22:13.229 "listen_address": { 00:22:13.229 "trtype": "TCP", 00:22:13.229 "adrfam": "IPv4", 00:22:13.229 "traddr": "10.0.0.2", 00:22:13.229 "trsvcid": "4420" 00:22:13.229 }, 00:22:13.229 "peer_address": { 00:22:13.229 "trtype": "TCP", 00:22:13.229 "adrfam": "IPv4", 00:22:13.229 "traddr": "10.0.0.1", 00:22:13.229 "trsvcid": "60042" 00:22:13.229 }, 00:22:13.229 "auth": { 00:22:13.229 "state": "completed", 00:22:13.229 "digest": "sha512", 00:22:13.229 "dhgroup": "ffdhe8192" 00:22:13.229 } 00:22:13.229 } 00:22:13.229 ]' 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.229 21:28:47 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:13.486 21:28:48 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:MzdmMGNhNWYxNmUxYzYxMWM0MGI1ZTAxNzBjZDg3MDhmNGRhNmJmMTUxYzQ3Njhk5eX/kA==: --dhchap-ctrl-secret DHHC-1:03:MGZiMDQ3ZjU2NzlkN2RjMTMyMDYyMjE1NWQ5NTRiZTc1MDdjYjZlOTQ5NGZmYWRmYzI4NWI3Zjg4MGFiMmYyMFKIj9A=: 00:22:14.417 21:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.417 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.417 21:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:14.417 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.417 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.417 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.417 21:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:14.417 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:14.417 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:22:14.674 21:28:49 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key 
key2 00:22:15.239 request: 00:22:15.239 { 00:22:15.239 "name": "nvme0", 00:22:15.239 "trtype": "tcp", 00:22:15.239 "traddr": "10.0.0.2", 00:22:15.239 "adrfam": "ipv4", 00:22:15.239 "trsvcid": "4420", 00:22:15.239 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:15.239 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:15.239 "prchk_reftag": false, 00:22:15.239 "prchk_guard": false, 00:22:15.239 "hdgst": false, 00:22:15.239 "ddgst": false, 00:22:15.239 "dhchap_key": "key2", 00:22:15.239 "method": "bdev_nvme_attach_controller", 00:22:15.239 "req_id": 1 00:22:15.239 } 00:22:15.239 Got JSON-RPC error response 00:22:15.239 response: 00:22:15.239 { 00:22:15.239 "code": -5, 00:22:15.239 "message": "Input/output error" 00:22:15.239 } 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:15.496 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:15.497 21:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:16.429 request: 00:22:16.429 { 00:22:16.429 "name": "nvme0", 00:22:16.429 "trtype": "tcp", 00:22:16.429 "traddr": "10.0.0.2", 00:22:16.429 "adrfam": "ipv4", 00:22:16.429 "trsvcid": "4420", 00:22:16.429 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.429 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.429 "prchk_reftag": false, 00:22:16.429 "prchk_guard": false, 00:22:16.429 "hdgst": false, 00:22:16.429 "ddgst": false, 00:22:16.429 "dhchap_key": "key1", 00:22:16.429 "dhchap_ctrlr_key": "ckey2", 00:22:16.429 "method": "bdev_nvme_attach_controller", 00:22:16.429 "req_id": 1 00:22:16.429 } 00:22:16.429 Got JSON-RPC error response 00:22:16.429 response: 00:22:16.429 { 00:22:16.429 "code": -5, 00:22:16.429 "message": "Input/output error" 00:22:16.429 } 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local 
arg=hostrpc 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.429 21:28:50 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:16.994 request: 00:22:16.994 { 00:22:16.994 "name": "nvme0", 00:22:16.994 "trtype": "tcp", 00:22:16.994 "traddr": "10.0.0.2", 00:22:16.994 "adrfam": "ipv4", 00:22:16.994 "trsvcid": "4420", 00:22:16.994 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:16.994 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:16.994 "prchk_reftag": false, 00:22:16.994 "prchk_guard": false, 00:22:16.994 "hdgst": false, 00:22:16.994 "ddgst": false, 00:22:16.994 "dhchap_key": "key1", 00:22:16.994 "dhchap_ctrlr_key": "ckey1", 00:22:16.994 "method": "bdev_nvme_attach_controller", 00:22:16.994 "req_id": 1 00:22:16.994 } 00:22:16.994 Got JSON-RPC error response 00:22:16.994 response: 00:22:16.994 { 00:22:16.994 "code": -5, 00:22:16.994 "message": "Input/output error" 00:22:16.994 } 00:22:16.994 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:16.994 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:16.994 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 913696 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 913696 ']' 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 913696 00:22:16.995 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:17.253 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:17.253 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 913696 00:22:17.253 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:17.253 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 
= sudo ']' 00:22:17.253 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 913696' 00:22:17.253 killing process with pid 913696 00:22:17.253 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 913696 00:22:17.253 21:28:51 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 913696 00:22:17.510 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:17.510 21:28:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:17.510 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=936199 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 936199 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 936199 ']' 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.511 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 936199 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 936199 ']' 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:17.768 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
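Note: the trailing "Waiting for process..." line is waitforlisten polling the freshly restarted target (pid 936199, launched with --wait-for-rpc -L nvmf_auth so that auth-level debug logging is enabled and subsystem initialization is deferred until an explicit RPC). A rough sketch of that wait loop, assuming the stock rpc.py and the socket path shown in the log; the retry count and sleep interval are illustrative, not quoted from the harness:

  waitforlisten() {
      local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i
      for ((i = 0; i < 100; i++)); do
          kill -0 "$pid" 2> /dev/null || return 1   # app died before it could listen
          # rpc_get_methods answers as soon as the JSON-RPC server is up
          rpc.py -s "$rpc_addr" rpc_get_methods &> /dev/null && return 0
          sleep 0.1
      done
      return 1
  }

Because of --wait-for-rpc, the bare rpc_cmd call that follows (target/auth.sh@143) is presumably what pushes the deferred start-up configuration — typically framework_start_init and friends — into the new target before it begins listening.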
00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:17.768 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.025 21:28:52 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:18.956 00:22:18.956 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:22:18.956 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:22:18.956 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:22:19.213 { 00:22:19.213 
"cntlid": 1, 00:22:19.213 "qid": 0, 00:22:19.213 "state": "enabled", 00:22:19.213 "thread": "nvmf_tgt_poll_group_000", 00:22:19.213 "listen_address": { 00:22:19.213 "trtype": "TCP", 00:22:19.213 "adrfam": "IPv4", 00:22:19.213 "traddr": "10.0.0.2", 00:22:19.213 "trsvcid": "4420" 00:22:19.213 }, 00:22:19.213 "peer_address": { 00:22:19.213 "trtype": "TCP", 00:22:19.213 "adrfam": "IPv4", 00:22:19.213 "traddr": "10.0.0.1", 00:22:19.213 "trsvcid": "60410" 00:22:19.213 }, 00:22:19.213 "auth": { 00:22:19.213 "state": "completed", 00:22:19.213 "digest": "sha512", 00:22:19.213 "dhgroup": "ffdhe8192" 00:22:19.213 } 00:22:19.213 } 00:22:19.213 ]' 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:19.213 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:19.214 21:28:53 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:19.471 21:28:54 nvmf_tcp.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:ZGM1OTNiYWU2NjY1Mjg5NDczNGYyNGQ5N2MxMGJjNWUxMDJjMDU2OTg1YTdmZDhhMjM5NDBjZTM0NGFlOWRlZeZ4GDc=: 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:20.403 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:20.403 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- 
target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.660 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:20.918 request: 00:22:20.918 { 00:22:20.918 "name": "nvme0", 00:22:20.918 "trtype": "tcp", 00:22:20.918 "traddr": "10.0.0.2", 00:22:20.918 "adrfam": "ipv4", 00:22:20.918 "trsvcid": "4420", 00:22:20.918 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:20.918 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:20.918 "prchk_reftag": false, 00:22:20.918 "prchk_guard": false, 00:22:20.918 "hdgst": false, 00:22:20.918 "ddgst": false, 00:22:20.918 "dhchap_key": "key3", 00:22:20.918 "method": "bdev_nvme_attach_controller", 00:22:20.918 "req_id": 1 00:22:20.918 } 00:22:20.918 Got JSON-RPC error response 00:22:20.918 response: 00:22:20.918 { 00:22:20.918 "code": -5, 00:22:20.918 "message": "Input/output error" 00:22:20.918 } 00:22:20.918 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:20.918 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:20.918 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:20.918 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:20.918 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:22:20.918 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:22:20.918 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:20.918 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.176 21:28:55 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:22:21.434 request: 00:22:21.434 { 00:22:21.434 "name": "nvme0", 00:22:21.434 "trtype": "tcp", 00:22:21.434 "traddr": "10.0.0.2", 00:22:21.434 "adrfam": "ipv4", 00:22:21.434 "trsvcid": "4420", 00:22:21.434 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.434 "prchk_reftag": false, 00:22:21.434 "prchk_guard": false, 00:22:21.434 "hdgst": false, 00:22:21.434 "ddgst": false, 00:22:21.434 "dhchap_key": "key3", 00:22:21.434 "method": "bdev_nvme_attach_controller", 00:22:21.434 "req_id": 1 00:22:21.434 } 00:22:21.434 Got JSON-RPC error response 00:22:21.434 response: 00:22:21.434 { 00:22:21.434 "code": -5, 00:22:21.434 "message": "Input/output error" 00:22:21.434 } 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.434 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:21.692 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:21.950 request: 00:22:21.950 { 00:22:21.950 "name": "nvme0", 00:22:21.950 "trtype": "tcp", 00:22:21.950 "traddr": "10.0.0.2", 00:22:21.950 "adrfam": "ipv4", 00:22:21.950 "trsvcid": "4420", 00:22:21.950 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:21.950 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:22:21.950 "prchk_reftag": false, 00:22:21.950 "prchk_guard": false, 00:22:21.950 "hdgst": false, 00:22:21.950 "ddgst": false, 00:22:21.950 
"dhchap_key": "key0", 00:22:21.950 "dhchap_ctrlr_key": "key1", 00:22:21.950 "method": "bdev_nvme_attach_controller", 00:22:21.950 "req_id": 1 00:22:21.950 } 00:22:21.950 Got JSON-RPC error response 00:22:21.950 response: 00:22:21.950 { 00:22:21.950 "code": -5, 00:22:21.950 "message": "Input/output error" 00:22:21.950 } 00:22:21.950 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:22:21.950 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:22:21.950 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:22:21.950 21:28:56 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:22:21.950 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:21.950 21:28:56 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:22:22.515 00:22:22.515 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:22:22.515 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:22:22.515 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:22.772 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:22.772 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:22.772 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 913720 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 913720 ']' 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 913720 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 913720 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 913720' 00:22:23.030 killing process with pid 913720 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 913720 00:22:23.030 21:28:57 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 913720 00:22:23.288 
21:28:57 nvmf_tcp.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:23.288 21:28:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:23.288 21:28:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:22:23.288 21:28:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:23.288 21:28:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:22:23.288 21:28:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:23.288 21:28:57 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:23.288 rmmod nvme_tcp 00:22:23.288 rmmod nvme_fabrics 00:22:23.288 rmmod nvme_keyring 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 936199 ']' 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 936199 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 936199 ']' 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 936199 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:23.288 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 936199 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 936199' 00:22:23.546 killing process with pid 936199 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 936199 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 936199 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:23.546 21:28:58 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.079 21:29:00 nvmf_tcp.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:26.079 21:29:00 nvmf_tcp.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.77p /tmp/spdk.key-sha256.jGx /tmp/spdk.key-sha384.uBP /tmp/spdk.key-sha512.Sh9 /tmp/spdk.key-sha512.xmA /tmp/spdk.key-sha384.aQ9 /tmp/spdk.key-sha256.iIF '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:26.079 00:22:26.079 real 3m8.938s 00:22:26.079 user 7m20.412s 00:22:26.079 sys 0m25.212s 00:22:26.079 21:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:26.079 21:29:00 nvmf_tcp.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:26.079 ************************************ 00:22:26.079 END TEST nvmf_auth_target 00:22:26.079 ************************************ 00:22:26.079 21:29:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:26.079 21:29:00 nvmf_tcp -- nvmf/nvmf.sh@59 -- # '[' tcp = tcp ']' 00:22:26.079 21:29:00 nvmf_tcp -- nvmf/nvmf.sh@60 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:26.079 21:29:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:22:26.079 21:29:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:26.079 21:29:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:26.079 ************************************ 00:22:26.079 START TEST nvmf_bdevio_no_huge 00:22:26.079 ************************************ 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:26.079 * Looking for test storage... 00:22:26.079 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
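Note: the auth target test ends here (about 3m09s wall clock) and the bdevio test starts in the same workspace with --no-hugepages. Its nvmf/common.sh preamble re-derives the host identity exactly as the auth test did: one nvme gen-hostnqn call yields the UUID-based NQN, and the host ID is the bare UUID pulled back out of it. Condensed below; the parameter-expansion spelling is a sketch rather than a quote of common.sh:

  NVME_HOSTNQN=$(nvme gen-hostnqn)          # nqn.2014-08.org.nvmexpress:uuid:<uuid>
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}       # strip the prefix, keep the bare uuid
  NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")

Both tests landed on the same UUID (5b23e107-...), so on this rig gen-hostnqn evidently derives it from the machine's persistent DMI/system UUID rather than rolling a fresh random value each run.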
00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:26.079 21:29:00 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:26.079 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:22:26.080 21:29:00 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 
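gather_supported_nvmf_pci_devs, whose xtrace follows, matches NICs by PCI vendor:device ID (Intel E810/X722 and several Mellanox parts). An illustrative manual equivalent, using only IDs and paths that appear in this log:

lspci -nn -d 8086:159b                      # Intel E810 ports; should list 0000:0a:00.0/.1 as found below
ls /sys/bus/pci/devices/0000:0a:00.0/net/   # kernel netdev for a port, same glob nvmf/common.sh@383 uses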
00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:27.980 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:27.980 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:27.981 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:27.981 
21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:27.981 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:27.981 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:27.981 21:29:02 
nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:27.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:27.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.250 ms 00:22:27.981 00:22:27.981 --- 10.0.0.2 ping statistics --- 00:22:27.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.981 rtt min/avg/max/mdev = 0.250/0.250/0.250/0.000 ms 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:27.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:27.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.101 ms 00:22:27.981 00:22:27.981 --- 10.0.0.1 ping statistics --- 00:22:27.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:27.981 rtt min/avg/max/mdev = 0.101/0.101/0.101/0.000 ms 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=938959 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 938959 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 938959 ']' 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:27.981 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:27.981 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:27.981 [2024-07-11 21:29:02.602452] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:27.981 [2024-07-11 21:29:02.602547] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:27.981 [2024-07-11 21:29:02.674270] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:28.239 [2024-07-11 21:29:02.765273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.239 [2024-07-11 21:29:02.765330] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.239 [2024-07-11 21:29:02.765356] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.239 [2024-07-11 21:29:02.765370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:28.239 [2024-07-11 21:29:02.765383] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
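Condensing the nvmf_tcp_init steps logged above into a standalone sketch (interface names cvl_0_0/cvl_0_1 are this rig's renamed E810 ports; substitute your own, and run as root):

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the default namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP listener port
ping -c 1 10.0.0.2                                   # initiator -> target, as verified above

The target is then launched inside that namespace exactly as logged: ip netns exec cvl_0_0_ns_spdk .../build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78.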
00:22:28.239 [2024-07-11 21:29:02.765468] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:22:28.239 [2024-07-11 21:29:02.765521] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:22:28.239 [2024-07-11 21:29:02.765574] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:22:28.239 [2024-07-11 21:29:02.765576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:28.239 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:28.239 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:22:28.239 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:28.239 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:28.239 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.239 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.239 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.240 [2024-07-11 21:29:02.892195] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.240 Malloc0 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:28.240 [2024-07-11 21:29:02.930172] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:22:28.240 { 00:22:28.240 "params": { 00:22:28.240 "name": "Nvme$subsystem", 00:22:28.240 "trtype": "$TEST_TRANSPORT", 00:22:28.240 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:28.240 "adrfam": "ipv4", 00:22:28.240 "trsvcid": "$NVMF_PORT", 00:22:28.240 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:28.240 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:28.240 "hdgst": ${hdgst:-false}, 00:22:28.240 "ddgst": ${ddgst:-false} 00:22:28.240 }, 00:22:28.240 "method": "bdev_nvme_attach_controller" 00:22:28.240 } 00:22:28.240 EOF 00:22:28.240 )") 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:22:28.240 21:29:02 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:22:28.240 "params": { 00:22:28.240 "name": "Nvme1", 00:22:28.240 "trtype": "tcp", 00:22:28.240 "traddr": "10.0.0.2", 00:22:28.240 "adrfam": "ipv4", 00:22:28.240 "trsvcid": "4420", 00:22:28.240 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:28.240 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:28.240 "hdgst": false, 00:22:28.240 "ddgst": false 00:22:28.240 }, 00:22:28.240 "method": "bdev_nvme_attach_controller" 00:22:28.240 }' 00:22:28.240 [2024-07-11 21:29:02.977700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:22:28.240 [2024-07-11 21:29:02.977806] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid938983 ] 00:22:28.497 [2024-07-11 21:29:03.039374] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:28.497 [2024-07-11 21:29:03.125268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:28.497 [2024-07-11 21:29:03.125317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:28.497 [2024-07-11 21:29:03.125321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.754 I/O targets: 00:22:28.754 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:28.754 00:22:28.754 00:22:28.754 CUnit - A unit testing framework for C - Version 2.1-3 00:22:28.754 http://cunit.sourceforge.net/ 00:22:28.754 00:22:28.754 00:22:28.754 Suite: bdevio tests on: Nvme1n1 00:22:28.754 Test: blockdev write read block ...passed 00:22:28.754 Test: blockdev write zeroes read block ...passed 00:22:28.754 Test: blockdev write zeroes read no split ...passed 00:22:29.011 Test: blockdev write zeroes read split ...passed 00:22:29.011 Test: blockdev write zeroes read split partial ...passed 00:22:29.011 Test: blockdev reset ...[2024-07-11 21:29:03.597173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:22:29.011 [2024-07-11 21:29:03.597280] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1189b00 (9): Bad file descriptor 00:22:29.011 [2024-07-11 21:29:03.654864] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:22:29.011 passed 00:22:29.011 Test: blockdev write read 8 blocks ...passed 00:22:29.011 Test: blockdev write read size > 128k ...passed 00:22:29.011 Test: blockdev write read invalid size ...passed 00:22:29.011 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:29.011 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:29.011 Test: blockdev write read max offset ...passed 00:22:29.272 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:29.272 Test: blockdev writev readv 8 blocks ...passed 00:22:29.272 Test: blockdev writev readv 30 x 1block ...passed 00:22:29.272 Test: blockdev writev readv block ...passed 00:22:29.272 Test: blockdev writev readv size > 128k ...passed 00:22:29.272 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:29.272 Test: blockdev comparev and writev ...[2024-07-11 21:29:03.947117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:29.272 [2024-07-11 21:29:03.947153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:03.947178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:29.273 [2024-07-11 21:29:03.947196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:03.947578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:29.273 [2024-07-11 21:29:03.947603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:03.947626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:29.273 [2024-07-11 21:29:03.947642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:03.948029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:29.273 [2024-07-11 21:29:03.948054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:03.948076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:29.273 [2024-07-11 21:29:03.948093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:03.948435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:29.273 [2024-07-11 21:29:03.948460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:03.948487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:29.273 [2024-07-11 21:29:03.948504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:29.273 passed 00:22:29.273 Test: blockdev nvme passthru rw ...passed 00:22:29.273 Test: blockdev nvme passthru vendor specific ...[2024-07-11 21:29:04.031084] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.273 [2024-07-11 21:29:04.031111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:04.031269] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.273 [2024-07-11 21:29:04.031292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:04.031443] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.273 [2024-07-11 21:29:04.031465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:29.273 [2024-07-11 21:29:04.031624] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:29.273 [2024-07-11 21:29:04.031646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:29.273 passed 00:22:29.562 Test: blockdev nvme admin passthru ...passed 00:22:29.562 Test: blockdev copy ...passed 00:22:29.562 00:22:29.562 Run Summary: Type Total Ran Passed Failed Inactive 00:22:29.562 suites 1 1 n/a 0 0 00:22:29.562 tests 23 23 23 0 0 00:22:29.562 asserts 152 152 152 0 n/a 00:22:29.562 00:22:29.562 Elapsed time = 1.390 seconds 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:29.820 rmmod nvme_tcp 00:22:29.820 rmmod nvme_fabrics 00:22:29.820 rmmod nvme_keyring 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 938959 ']' 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge 
-- nvmf/common.sh@490 -- # killprocess 938959 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 938959 ']' 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 938959 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 938959 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 938959' 00:22:29.820 killing process with pid 938959 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 938959 00:22:29.820 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 938959 00:22:30.387 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:30.387 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:30.387 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:30.387 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:30.387 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:30.387 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:30.387 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:30.387 21:29:04 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.291 21:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:32.291 00:22:32.291 real 0m6.530s 00:22:32.291 user 0m11.288s 00:22:32.291 sys 0m2.519s 00:22:32.291 21:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:32.291 21:29:06 nvmf_tcp.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:32.291 ************************************ 00:22:32.291 END TEST nvmf_bdevio_no_huge 00:22:32.291 ************************************ 00:22:32.291 21:29:06 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:22:32.291 21:29:06 nvmf_tcp -- nvmf/nvmf.sh@61 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:32.291 21:29:06 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:32.291 21:29:06 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:32.291 21:29:06 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:32.291 ************************************ 00:22:32.291 START TEST nvmf_tls 00:22:32.291 ************************************ 00:22:32.291 21:29:06 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:32.291 * Looking for test storage... 
00:22:32.291 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:22:32.291 21:29:07 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:22:34.193 
21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:34.193 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@340 
-- # for pci in "${pci_devs[@]}" 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:34.193 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:34.193 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:34.193 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@234 
-- # (( 2 > 1 )) 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:34.193 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:34.194 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:34.194 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.211 ms 00:22:34.194 00:22:34.194 --- 10.0.0.2 ping statistics --- 00:22:34.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.194 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:34.194 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:34.194 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:22:34.194 00:22:34.194 --- 10.0.0.1 ping statistics --- 00:22:34.194 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:34.194 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=941060 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 941060 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 941060 ']' 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:34.194 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:34.194 21:29:08 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.452 [2024-07-11 21:29:09.001121] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:22:34.452 [2024-07-11 21:29:09.001211] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:34.452 EAL: No free 2048 kB hugepages reported on node 1 00:22:34.452 [2024-07-11 21:29:09.066161] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:34.452 [2024-07-11 21:29:09.152119] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:34.452 [2024-07-11 21:29:09.152186] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:34.452 [2024-07-11 21:29:09.152199] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:34.452 [2024-07-11 21:29:09.152209] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:34.452 [2024-07-11 21:29:09.152219] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:34.452 [2024-07-11 21:29:09.152258] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:34.452 21:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:34.452 21:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:34.452 21:29:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:34.452 21:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:34.452 21:29:09 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:34.710 21:29:09 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:34.710 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:22:34.710 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:34.710 true 00:22:34.968 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:34.968 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:22:34.968 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@73 -- # version=0 00:22:34.968 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:22:34.968 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:35.226 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.226 21:29:09 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:22:35.484 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@81 -- # version=13 00:22:35.484 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:22:35.484 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:35.742 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.742 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:22:35.999 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@89 -- # version=7 00:22:35.999 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:22:35.999 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:35.999 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:22:36.256 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:22:36.256 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:22:36.256 21:29:10 nvmf_tcp.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:22:36.514 21:29:11 nvmf_tcp.nvmf_tls -- 
target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:36.514 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:22:36.771 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:22:36.771 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:22:36.771 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:37.028 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:37.028 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:37.285 21:29:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:37.285 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:37.285 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:37.285 21:29:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:37.285 21:29:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:22:37.285 21:29:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:22:37.286 21:29:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:22:37.286 21:29:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:22:37.286 21:29:12 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.tpmWLOOrhX 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.RRWO9wf6ri 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.tpmWLOOrhX 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.RRWO9wf6ri 00:22:37.542 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
sock_impl_set_options -i ssl --tls-version 13 00:22:37.799 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:38.057 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.tpmWLOOrhX 00:22:38.057 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.tpmWLOOrhX 00:22:38.057 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:38.315 [2024-07-11 21:29:12.937823] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.315 21:29:12 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:38.572 21:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:38.829 [2024-07-11 21:29:13.487264] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:38.829 [2024-07-11 21:29:13.487508] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.829 21:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:39.086 malloc0 00:22:39.086 21:29:13 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:39.342 21:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tpmWLOOrhX 00:22:39.599 [2024-07-11 21:29:14.277451] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:39.599 21:29:14 nvmf_tcp.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.tpmWLOOrhX 00:22:39.599 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.794 Initializing NVMe Controllers 00:22:51.794 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:51.794 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:51.794 Initialization complete. Launching workers. 
00:22:51.794 ======================================================== 00:22:51.794 Latency(us) 00:22:51.794 Device Information : IOPS MiB/s Average min max 00:22:51.794 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7800.39 30.47 8207.50 1334.51 10257.57 00:22:51.794 ======================================================== 00:22:51.794 Total : 7800.39 30.47 8207.50 1334.51 10257.57 00:22:51.794 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.tpmWLOOrhX 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tpmWLOOrhX' 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=942940 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 942940 /var/tmp/bdevperf.sock 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 942940 ']' 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:51.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.794 [2024-07-11 21:29:24.446074] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
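Worth noting is the startup ordering the TLS path depends on: nvmf_tgt is launched with --wait-for-rpc, the ssl socket implementation is selected and its options (tls-version, ktls) are set while the framework is still paused, and only then does framework_start_init run, after which the transport, subsystem, and TLS-enabled listener are created. Condensed to the bare RPC sequence with the same binaries and paths the run uses (a sketch of the order, not the test script itself):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
app=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt

ip netns exec cvl_0_0_ns_spdk $app -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc &
$rpc sock_set_default_impl -i ssl                    # pre-init: pick the TLS-capable impl
$rpc sock_impl_set_options -i ssl --tls-version 13   # pre-init: pin TLS 1.3
$rpc framework_start_init                            # now the framework finishes coming up
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k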
00:22:51.794 [2024-07-11 21:29:24.446152] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid942940 ] 00:22:51.794 EAL: No free 2048 kB hugepages reported on node 1 00:22:51.794 [2024-07-11 21:29:24.503199] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.794 [2024-07-11 21:29:24.590452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:22:51.794 21:29:24 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tpmWLOOrhX 00:22:51.794 [2024-07-11 21:29:24.953187] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:51.794 [2024-07-11 21:29:24.953326] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:51.794 TLSTESTn1 00:22:51.794 21:29:25 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:22:51.794 Running I/O for 10 seconds... 00:23:01.781 00:23:01.781 Latency(us) 00:23:01.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.781 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:01.781 Verification LBA range: start 0x0 length 0x2000 00:23:01.781 TLSTESTn1 : 10.02 3427.83 13.39 0.00 0.00 37272.99 6019.60 44273.21 00:23:01.781 =================================================================================================================== 00:23:01.781 Total : 3427.83 13.39 0.00 0.00 37272.99 6019.60 44273.21 00:23:01.781 0 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 942940 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 942940 ']' 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 942940 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 942940 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 942940' 00:23:01.781 killing process with pid 942940 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 942940 00:23:01.781 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.781 00:23:01.781 Latency(us) 00:23:01.781 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:23:01.781 =================================================================================================================== 00:23:01.781 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:01.781 [2024-07-11 21:29:35.244383] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 942940 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RRWO9wf6ri 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RRWO9wf6ri 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.RRWO9wf6ri 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.RRWO9wf6ri' 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=944178 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 944178 /var/tmp/bdevperf.sock 00:23:01.781 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 944178 ']' 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.782 [2024-07-11 21:29:35.494857] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:01.782 [2024-07-11 21:29:35.494938] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944178 ] 00:23:01.782 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.782 [2024-07-11 21:29:35.556267] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.782 [2024-07-11 21:29:35.639550] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.RRWO9wf6ri 00:23:01.782 [2024-07-11 21:29:35.956316] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:01.782 [2024-07-11 21:29:35.956457] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:01.782 [2024-07-11 21:29:35.965257] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:01.782 [2024-07-11 21:29:35.965323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e5bb0 (107): Transport endpoint is not connected 00:23:01.782 [2024-07-11 21:29:35.966311] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x13e5bb0 (9): Bad file descriptor 00:23:01.782 [2024-07-11 21:29:35.967311] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:01.782 [2024-07-11 21:29:35.967334] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:01.782 [2024-07-11 21:29:35.967362] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
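The failure above is the first negative case: the subsystem was provisioned with the key in /tmp/tmp.tpmWLOOrhX, the initiator presented /tmp/tmp.RRWO9wf6ri, so the handshake dies and bdev_nvme_attach_controller surfaces it as an I/O error (the JSON-RPC dump follows below). Both files hold keys in the NVMe TLS PSK interchange format emitted by format_interchange_psk earlier in the run. A hedged reconstruction of that encoding, assuming the payload is base64 of the configured key bytes followed by their CRC-32 in little-endian order (the helper name and the CRC convention are this sketch's assumptions; the expected output shown is copied from the run itself):

psk_interchange() {
  local key=$1 hash=$2     # hash id: 1 = SHA-256 PSK, 2 = SHA-384 PSK
  # assumed layout: NVMeTLSkey-1:<hash>:<base64(key || crc32_le(key))>:
  python3 - "$key" "$hash" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()                       # the run feeds the hex string as raw bytes
crc = struct.pack('<I', zlib.crc32(key) & 0xffffffff)
print(f"NVMeTLSkey-1:{int(sys.argv[2]):02}:{base64.b64encode(key + crc).decode()}:")
PY
}

psk_interchange 00112233445566778899aabbccddeeff 1
# the run above produced: NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: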
00:23:01.782 request: 00:23:01.782 { 00:23:01.782 "name": "TLSTEST", 00:23:01.782 "trtype": "tcp", 00:23:01.782 "traddr": "10.0.0.2", 00:23:01.782 "adrfam": "ipv4", 00:23:01.782 "trsvcid": "4420", 00:23:01.782 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:01.782 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:01.782 "prchk_reftag": false, 00:23:01.782 "prchk_guard": false, 00:23:01.782 "hdgst": false, 00:23:01.782 "ddgst": false, 00:23:01.782 "psk": "/tmp/tmp.RRWO9wf6ri", 00:23:01.782 "method": "bdev_nvme_attach_controller", 00:23:01.782 "req_id": 1 00:23:01.782 } 00:23:01.782 Got JSON-RPC error response 00:23:01.782 response: 00:23:01.782 { 00:23:01.782 "code": -5, 00:23:01.782 "message": "Input/output error" 00:23:01.782 } 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 944178 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944178 ']' 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944178 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:01.782 21:29:35 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944178 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944178' 00:23:01.782 killing process with pid 944178 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944178 00:23:01.782 Received shutdown signal, test time was about 10.000000 seconds 00:23:01.782 00:23:01.782 Latency(us) 00:23:01.782 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:01.782 =================================================================================================================== 00:23:01.782 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:01.782 [2024-07-11 21:29:36.014683] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944178 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tpmWLOOrhX 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tpmWLOOrhX 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.tpmWLOOrhX 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tpmWLOOrhX' 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=944274 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 944274 /var/tmp/bdevperf.sock 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 944274 ']' 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:01.782 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:01.782 [2024-07-11 21:29:36.278902] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:01.782 [2024-07-11 21:29:36.278986] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944274 ] 00:23:01.782 EAL: No free 2048 kB hugepages reported on node 1 00:23:01.782 [2024-07-11 21:29:36.336494] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:01.782 [2024-07-11 21:29:36.417661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:01.782 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.tpmWLOOrhX 00:23:02.041 [2024-07-11 21:29:36.764770] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.041 [2024-07-11 21:29:36.764899] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:02.041 [2024-07-11 21:29:36.770012] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:02.041 [2024-07-11 21:29:36.770046] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:02.041 [2024-07-11 21:29:36.770100] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:02.041 [2024-07-11 21:29:36.770669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffebb0 (107): Transport endpoint is not connected 00:23:02.041 [2024-07-11 21:29:36.771657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xffebb0 (9): Bad file descriptor 00:23:02.041 [2024-07-11 21:29:36.772656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:02.041 [2024-07-11 21:29:36.772685] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:02.041 [2024-07-11 21:29:36.772712] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
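Second negative case: the key is right but the identity is wrong. The target looks PSKs up by the TLS identity string visible in the error ('NVMe0R01 <hostnqn> <subnqn>'), and nothing was ever provisioned for host2 against cnode1, so the lookup fails before the handshake can complete; the next case, wrong subsystem nqn, fails the same lookup from the other half of the identity. If this pairing were meant to work, the fix is one provisioning call with the RPCs the run already uses (a sketch; the test of course wants this case to fail):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# trust host2 on cnode1 with the same interchange-format key file
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 \
    --psk /tmp/tmp.tpmWLOOrhX
# after which the attach above would find a PSK for the host2 identity
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 \
    --psk /tmp/tmp.tpmWLOOrhX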
00:23:02.041 request: 00:23:02.041 { 00:23:02.041 "name": "TLSTEST", 00:23:02.041 "trtype": "tcp", 00:23:02.041 "traddr": "10.0.0.2", 00:23:02.041 "adrfam": "ipv4", 00:23:02.041 "trsvcid": "4420", 00:23:02.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:02.041 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:02.041 "prchk_reftag": false, 00:23:02.041 "prchk_guard": false, 00:23:02.041 "hdgst": false, 00:23:02.041 "ddgst": false, 00:23:02.041 "psk": "/tmp/tmp.tpmWLOOrhX", 00:23:02.041 "method": "bdev_nvme_attach_controller", 00:23:02.041 "req_id": 1 00:23:02.041 } 00:23:02.041 Got JSON-RPC error response 00:23:02.041 response: 00:23:02.041 { 00:23:02.041 "code": -5, 00:23:02.041 "message": "Input/output error" 00:23:02.041 } 00:23:02.041 21:29:36 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 944274 00:23:02.041 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944274 ']' 00:23:02.041 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944274 00:23:02.041 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:02.041 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.041 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944274 00:23:02.300 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:02.300 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:02.300 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944274' 00:23:02.300 killing process with pid 944274 00:23:02.300 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944274 00:23:02.300 Received shutdown signal, test time was about 10.000000 seconds 00:23:02.300 00:23:02.300 Latency(us) 00:23:02.300 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:02.300 =================================================================================================================== 00:23:02.300 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:02.300 [2024-07-11 21:29:36.824113] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:02.300 21:29:36 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944274 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tpmWLOOrhX 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tpmWLOOrhX 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.tpmWLOOrhX 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.tpmWLOOrhX' 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=944413 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 944413 /var/tmp/bdevperf.sock 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 944413 ']' 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:02.300 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:02.301 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:02.301 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:02.301 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:02.301 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:02.595 [2024-07-11 21:29:37.095332] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:02.595 [2024-07-11 21:29:37.095414] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944413 ] 00:23:02.595 EAL: No free 2048 kB hugepages reported on node 1 00:23:02.595 [2024-07-11 21:29:37.153593] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:02.595 [2024-07-11 21:29:37.238227] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:02.853 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:02.853 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:02.853 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.tpmWLOOrhX 00:23:02.853 [2024-07-11 21:29:37.577051] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:02.853 [2024-07-11 21:29:37.577207] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:02.853 [2024-07-11 21:29:37.589329] tcp.c: 881:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:02.853 [2024-07-11 21:29:37.589364] posix.c: 589:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:02.854 [2024-07-11 21:29:37.589449] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:02.854 [2024-07-11 21:29:37.589993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2541bb0 (107): Transport endpoint is not connected 00:23:02.854 [2024-07-11 21:29:37.590980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2541bb0 (9): Bad file descriptor 00:23:02.854 [2024-07-11 21:29:37.591980] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:23:02.854 [2024-07-11 21:29:37.592004] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:02.854 [2024-07-11 21:29:37.592033] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:23:02.854 request: 00:23:02.854 { 00:23:02.854 "name": "TLSTEST", 00:23:02.854 "trtype": "tcp", 00:23:02.854 "traddr": "10.0.0.2", 00:23:02.854 "adrfam": "ipv4", 00:23:02.854 "trsvcid": "4420", 00:23:02.854 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:02.854 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:02.854 "prchk_reftag": false, 00:23:02.854 "prchk_guard": false, 00:23:02.854 "hdgst": false, 00:23:02.854 "ddgst": false, 00:23:02.854 "psk": "/tmp/tmp.tpmWLOOrhX", 00:23:02.854 "method": "bdev_nvme_attach_controller", 00:23:02.854 "req_id": 1 00:23:02.854 } 00:23:02.854 Got JSON-RPC error response 00:23:02.854 response: 00:23:02.854 { 00:23:02.854 "code": -5, 00:23:02.854 "message": "Input/output error" 00:23:02.854 } 00:23:02.854 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 944413 00:23:02.854 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944413 ']' 00:23:02.854 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944413 00:23:02.854 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:02.854 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:02.854 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944413 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944413' 00:23:03.113 killing process with pid 944413 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944413 00:23:03.113 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.113 00:23:03.113 Latency(us) 00:23:03.113 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.113 =================================================================================================================== 00:23:03.113 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:03.113 [2024-07-11 21:29:37.642511] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944413 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t 
run_bdevperf 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=944552 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 944552 /var/tmp/bdevperf.sock 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 944552 ']' 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:03.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:03.113 21:29:37 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:03.372 [2024-07-11 21:29:37.895211] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:03.372 [2024-07-11 21:29:37.895291] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944552 ] 00:23:03.372 EAL: No free 2048 kB hugepages reported on node 1 00:23:03.372 [2024-07-11 21:29:37.952718] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:03.372 [2024-07-11 21:29:38.033322] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:03.372 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:03.372 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:03.372 21:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:03.938 [2024-07-11 21:29:38.422301] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:03.938 [2024-07-11 21:29:38.424181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1933160 (9): Bad file descriptor 00:23:03.938 [2024-07-11 21:29:38.425179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:23:03.938 [2024-07-11 21:29:38.425202] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:03.938 [2024-07-11 21:29:38.425229] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
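Fourth case: no key at all. The psk argument is empty, bdev_nvme_attach_controller is issued without --psk, and the listener was created with -k, which in this test marks the port as TLS-only, so the plaintext connect is torn down the same way (errno 107 on the socket, then the JSON-RPC I/O error below). If the -k semantics assumed here are right, a cleartext attach would only work against a listener added without that flag; an illustrative sketch on a hypothetical second port:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# port 4421 is made up for illustration: same subsystem, no -k, so no TLS requirement
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 \
    -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1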
00:23:03.938 request: 00:23:03.938 { 00:23:03.938 "name": "TLSTEST", 00:23:03.938 "trtype": "tcp", 00:23:03.938 "traddr": "10.0.0.2", 00:23:03.938 "adrfam": "ipv4", 00:23:03.938 "trsvcid": "4420", 00:23:03.938 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:03.938 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:03.938 "prchk_reftag": false, 00:23:03.938 "prchk_guard": false, 00:23:03.938 "hdgst": false, 00:23:03.938 "ddgst": false, 00:23:03.938 "method": "bdev_nvme_attach_controller", 00:23:03.938 "req_id": 1 00:23:03.938 } 00:23:03.938 Got JSON-RPC error response 00:23:03.938 response: 00:23:03.938 { 00:23:03.938 "code": -5, 00:23:03.938 "message": "Input/output error" 00:23:03.938 } 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 944552 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944552 ']' 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944552 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944552 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944552' 00:23:03.938 killing process with pid 944552 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944552 00:23:03.938 Received shutdown signal, test time was about 10.000000 seconds 00:23:03.938 00:23:03.938 Latency(us) 00:23:03.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:03.938 =================================================================================================================== 00:23:03.938 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944552 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@158 -- # killprocess 941060 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 941060 ']' 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 941060 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:03.938 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 941060 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 941060' 00:23:04.196 killing 
process with pid 941060 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 941060 00:23:04.196 [2024-07-11 21:29:38.717951] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 941060 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:23:04.196 21:29:38 nvmf_tcp.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:23:04.455 21:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:04.455 21:29:38 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.8mk28b471L 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.8mk28b471L 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=944703 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 944703 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 944703 ']' 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:04.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:04.455 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.455 [2024-07-11 21:29:39.049129] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:04.455 [2024-07-11 21:29:39.049207] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:04.455 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.455 [2024-07-11 21:29:39.115558] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.455 [2024-07-11 21:29:39.206325] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:04.455 [2024-07-11 21:29:39.206386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:04.455 [2024-07-11 21:29:39.206403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:04.455 [2024-07-11 21:29:39.206417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:04.455 [2024-07-11 21:29:39.206429] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:04.455 [2024-07-11 21:29:39.206464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.8mk28b471L 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8mk28b471L 00:23:04.713 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:04.971 [2024-07-11 21:29:39.557898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:04.971 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:05.230 21:29:39 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:05.488 [2024-07-11 21:29:40.063312] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:05.488 [2024-07-11 21:29:40.063559] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:05.488 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:05.745 malloc0 00:23:05.746 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:06.003 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 
--psk /tmp/tmp.8mk28b471L 00:23:06.261 [2024-07-11 21:29:40.792736] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8mk28b471L 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8mk28b471L' 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=944865 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 944865 /var/tmp/bdevperf.sock 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 944865 ']' 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:06.261 21:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:06.262 21:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:06.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:06.262 21:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:06.262 21:29:40 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:06.262 [2024-07-11 21:29:40.858351] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
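The rerun above repeats the whole positive path with the longer key: format_interchange_psk is fed 48 bytes of key material and hash id 2 (the SHA-384 PSK variant, hence the :02: tag in the key string), a fresh target is provisioned from /tmp/tmp.8mk28b471L, and the same bdevperf verify job is launched. In terms of the sketch given earlier, only the arguments change:

psk_interchange 00112233445566778899aabbccddeeff0011223344556677 2
# the run above produced: NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: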
00:23:06.262 [2024-07-11 21:29:40.858425] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid944865 ] 00:23:06.262 EAL: No free 2048 kB hugepages reported on node 1 00:23:06.262 [2024-07-11 21:29:40.918066] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:06.262 [2024-07-11 21:29:41.006130] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:06.518 21:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:06.518 21:29:41 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:06.518 21:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8mk28b471L 00:23:06.775 [2024-07-11 21:29:41.344763] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:06.775 [2024-07-11 21:29:41.344907] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:06.775 TLSTESTn1 00:23:06.775 21:29:41 nvmf_tcp.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:06.775 Running I/O for 10 seconds... 00:23:18.983 00:23:18.983 Latency(us) 00:23:18.983 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.983 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:18.984 Verification LBA range: start 0x0 length 0x2000 00:23:18.984 TLSTESTn1 : 10.04 3076.42 12.02 0.00 0.00 41515.34 5776.88 50486.99 00:23:18.984 =================================================================================================================== 00:23:18.984 Total : 3076.42 12.02 0.00 0.00 41515.34 5776.88 50486.99 00:23:18.984 0 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@45 -- # killprocess 944865 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944865 ']' 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944865 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944865 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944865' 00:23:18.984 killing process with pid 944865 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944865 00:23:18.984 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.984 00:23:18.984 Latency(us) 00:23:18.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average 
min max 00:23:18.984 =================================================================================================================== 00:23:18.984 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:18.984 [2024-07-11 21:29:51.637559] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944865 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.8mk28b471L 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8mk28b471L 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8mk28b471L 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.8mk28b471L 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.8mk28b471L' 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=946180 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 946180 /var/tmp/bdevperf.sock 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 946180 ']' 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:18.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.984 21:29:51 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.984 [2024-07-11 21:29:51.913778] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
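At this point tls.sh turns the scenario negative: target/tls.sh@170 makes the key file world-readable and @171 wraps run_bdevperf in NOT, so the run is expected to fail. Below is a minimal sketch of the client-side step that should now be rejected; the attach flags are copied from the call further down, while the && / || reporting is illustrative only:

    chmod 0666 /tmp/tmp.8mk28b471L
    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/tmp.8mk28b471L \
        && echo 'unexpected: attach succeeded' \
        || echo 'attach rejected, as the test expects'

The rejection shows up a few lines further down as bdev_nvme's 'Incorrect permissions for PSK file' error and a JSON-RPC response of code -1, 'Operation not permitted'.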
00:23:18.984 [2024-07-11 21:29:51.913859] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946180 ] 00:23:18.984 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.984 [2024-07-11 21:29:51.972405] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.984 [2024-07-11 21:29:52.057683] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8mk28b471L 00:23:18.984 [2024-07-11 21:29:52.439618] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:18.984 [2024-07-11 21:29:52.439714] bdev_nvme.c:6125:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:18.984 [2024-07-11 21:29:52.439747] bdev_nvme.c:6230:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.8mk28b471L 00:23:18.984 request: 00:23:18.984 { 00:23:18.984 "name": "TLSTEST", 00:23:18.984 "trtype": "tcp", 00:23:18.984 "traddr": "10.0.0.2", 00:23:18.984 "adrfam": "ipv4", 00:23:18.984 "trsvcid": "4420", 00:23:18.984 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.984 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.984 "prchk_reftag": false, 00:23:18.984 "prchk_guard": false, 00:23:18.984 "hdgst": false, 00:23:18.984 "ddgst": false, 00:23:18.984 "psk": "/tmp/tmp.8mk28b471L", 00:23:18.984 "method": "bdev_nvme_attach_controller", 00:23:18.984 "req_id": 1 00:23:18.984 } 00:23:18.984 Got JSON-RPC error response 00:23:18.984 response: 00:23:18.984 { 00:23:18.984 "code": -1, 00:23:18.984 "message": "Operation not permitted" 00:23:18.984 } 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@36 -- # killprocess 946180 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 946180 ']' 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 946180 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 946180 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 946180' 00:23:18.984 killing process with pid 946180 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 946180 00:23:18.984 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.984 00:23:18.984 Latency(us) 00:23:18.984 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.984 =================================================================================================================== 
00:23:18.984 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 946180 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@37 -- # return 1 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@174 -- # killprocess 944703 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 944703 ']' 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 944703 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 944703 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 944703' 00:23:18.984 killing process with pid 944703 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 944703 00:23:18.984 [2024-07-11 21:29:52.730039] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 944703 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=946323 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 946323 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 946323 ']' 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:18.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:18.984 21:29:52 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.984 [2024-07-11 21:29:53.040210] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:18.984 [2024-07-11 21:29:53.040307] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:18.985 EAL: No free 2048 kB hugepages reported on node 1 00:23:18.985 [2024-07-11 21:29:53.108020] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.985 [2024-07-11 21:29:53.195234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:18.985 [2024-07-11 21:29:53.195301] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:18.985 [2024-07-11 21:29:53.195319] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:18.985 [2024-07-11 21:29:53.195333] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:18.985 [2024-07-11 21:29:53.195345] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:18.985 [2024-07-11 21:29:53.195376] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.8mk28b471L 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.8mk28b471L 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.8mk28b471L 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8mk28b471L 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:18.985 [2024-07-11 21:29:53.591365] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:18.985 21:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:19.242 21:29:53 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:19.500 [2024-07-11 21:29:54.084670] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is 
considered experimental 00:23:19.500 [2024-07-11 21:29:54.084907] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:19.500 21:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:19.758 malloc0 00:23:19.758 21:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:20.016 21:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8mk28b471L 00:23:20.276 [2024-07-11 21:29:54.810470] tcp.c:3589:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:23:20.276 [2024-07-11 21:29:54.810523] tcp.c:3675:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:23:20.276 [2024-07-11 21:29:54.810561] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:20.276 request: 00:23:20.276 { 00:23:20.276 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:20.276 "host": "nqn.2016-06.io.spdk:host1", 00:23:20.276 "psk": "/tmp/tmp.8mk28b471L", 00:23:20.276 "method": "nvmf_subsystem_add_host", 00:23:20.276 "req_id": 1 00:23:20.276 } 00:23:20.276 Got JSON-RPC error response 00:23:20.276 response: 00:23:20.276 { 00:23:20.276 "code": -32603, 00:23:20.276 "message": "Internal error" 00:23:20.276 } 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- target/tls.sh@180 -- # killprocess 946323 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 946323 ']' 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 946323 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 946323 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 946323' 00:23:20.276 killing process with pid 946323 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 946323 00:23:20.276 21:29:54 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 946323 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.8mk28b471L 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@481 -- # nvmfpid=946613 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 946613 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 946613 ']' 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:20.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:20.535 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.535 [2024-07-11 21:29:55.161637] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:20.535 [2024-07-11 21:29:55.161720] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:20.535 EAL: No free 2048 kB hugepages reported on node 1 00:23:20.535 [2024-07-11 21:29:55.233101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.793 [2024-07-11 21:29:55.322320] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:20.793 [2024-07-11 21:29:55.322384] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:20.793 [2024-07-11 21:29:55.322401] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:20.793 [2024-07-11 21:29:55.322414] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:20.793 [2024-07-11 21:29:55.322433] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
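Both permission failures above trace back to the same rule: the client-side attach fails with -1 'Operation not permitted', and the target-side nvmf_subsystem_add_host fails with -32603 'Internal error', because SPDK refuses to load a PSK file that group or other can access. That is why tls.sh restores mode 0600 before restarting the target here. Assuming the check really is on the group/other mode bits, as the 'Incorrect permissions for PSK file' messages suggest, a rough shell equivalent would be:

    # GNU stat prints the octal mode, e.g. 600 or 666
    mode=$(stat -c '%a' /tmp/tmp.8mk28b471L)
    # any group/other bit set is enough to get the key rejected
    if (( 8#$mode & 8#077 )); then
        echo "PSK file too permissive ($mode); chmod 0600 it first"
    fi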
00:23:20.793 [2024-07-11 21:29:55.322466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:20.793 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:20.793 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:20.793 21:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:20.794 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:20.794 21:29:55 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.794 21:29:55 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:20.794 21:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.8mk28b471L 00:23:20.794 21:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8mk28b471L 00:23:20.794 21:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:21.052 [2024-07-11 21:29:55.672938] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:21.052 21:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:21.309 21:29:55 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:21.567 [2024-07-11 21:29:56.170252] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:21.567 [2024-07-11 21:29:56.170485] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:21.567 21:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:21.825 malloc0 00:23:21.825 21:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:22.083 21:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8mk28b471L 00:23:22.341 [2024-07-11 21:29:56.895723] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=946802 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 946802 /var/tmp/bdevperf.sock 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 946802 ']' 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:22.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:22.341 21:29:56 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:22.341 [2024-07-11 21:29:56.959442] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:22.341 [2024-07-11 21:29:56.959531] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid946802 ] 00:23:22.341 EAL: No free 2048 kB hugepages reported on node 1 00:23:22.341 [2024-07-11 21:29:57.021817] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:22.341 [2024-07-11 21:29:57.110549] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:22.599 21:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:22.599 21:29:57 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:22.599 21:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8mk28b471L 00:23:22.857 [2024-07-11 21:29:57.428350] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:22.857 [2024-07-11 21:29:57.428463] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:22.857 TLSTESTn1 00:23:22.857 21:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:23.115 21:29:57 nvmf_tcp.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:23:23.115 "subsystems": [ 00:23:23.115 { 00:23:23.115 "subsystem": "keyring", 00:23:23.115 "config": [] 00:23:23.115 }, 00:23:23.115 { 00:23:23.115 "subsystem": "iobuf", 00:23:23.115 "config": [ 00:23:23.115 { 00:23:23.115 "method": "iobuf_set_options", 00:23:23.115 "params": { 00:23:23.115 "small_pool_count": 8192, 00:23:23.115 "large_pool_count": 1024, 00:23:23.115 "small_bufsize": 8192, 00:23:23.115 "large_bufsize": 135168 00:23:23.115 } 00:23:23.115 } 00:23:23.115 ] 00:23:23.115 }, 00:23:23.115 { 00:23:23.115 "subsystem": "sock", 00:23:23.115 "config": [ 00:23:23.115 { 00:23:23.115 "method": "sock_set_default_impl", 00:23:23.115 "params": { 00:23:23.115 "impl_name": "posix" 00:23:23.115 } 00:23:23.115 }, 00:23:23.115 { 00:23:23.115 "method": "sock_impl_set_options", 00:23:23.115 "params": { 00:23:23.115 "impl_name": "ssl", 00:23:23.115 "recv_buf_size": 4096, 00:23:23.115 "send_buf_size": 4096, 00:23:23.115 "enable_recv_pipe": true, 00:23:23.115 "enable_quickack": false, 00:23:23.115 "enable_placement_id": 0, 00:23:23.115 "enable_zerocopy_send_server": true, 00:23:23.115 "enable_zerocopy_send_client": false, 00:23:23.115 "zerocopy_threshold": 0, 00:23:23.115 "tls_version": 0, 00:23:23.115 "enable_ktls": false 00:23:23.115 } 00:23:23.115 }, 00:23:23.115 { 00:23:23.115 "method": "sock_impl_set_options", 00:23:23.115 "params": { 00:23:23.115 "impl_name": "posix", 00:23:23.115 "recv_buf_size": 2097152, 00:23:23.115 
"send_buf_size": 2097152, 00:23:23.115 "enable_recv_pipe": true, 00:23:23.115 "enable_quickack": false, 00:23:23.115 "enable_placement_id": 0, 00:23:23.115 "enable_zerocopy_send_server": true, 00:23:23.115 "enable_zerocopy_send_client": false, 00:23:23.115 "zerocopy_threshold": 0, 00:23:23.115 "tls_version": 0, 00:23:23.115 "enable_ktls": false 00:23:23.115 } 00:23:23.115 } 00:23:23.115 ] 00:23:23.115 }, 00:23:23.115 { 00:23:23.115 "subsystem": "vmd", 00:23:23.115 "config": [] 00:23:23.115 }, 00:23:23.115 { 00:23:23.115 "subsystem": "accel", 00:23:23.115 "config": [ 00:23:23.115 { 00:23:23.115 "method": "accel_set_options", 00:23:23.115 "params": { 00:23:23.115 "small_cache_size": 128, 00:23:23.115 "large_cache_size": 16, 00:23:23.115 "task_count": 2048, 00:23:23.115 "sequence_count": 2048, 00:23:23.115 "buf_count": 2048 00:23:23.115 } 00:23:23.115 } 00:23:23.115 ] 00:23:23.115 }, 00:23:23.115 { 00:23:23.115 "subsystem": "bdev", 00:23:23.115 "config": [ 00:23:23.115 { 00:23:23.115 "method": "bdev_set_options", 00:23:23.115 "params": { 00:23:23.115 "bdev_io_pool_size": 65535, 00:23:23.115 "bdev_io_cache_size": 256, 00:23:23.115 "bdev_auto_examine": true, 00:23:23.116 "iobuf_small_cache_size": 128, 00:23:23.116 "iobuf_large_cache_size": 16 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "bdev_raid_set_options", 00:23:23.116 "params": { 00:23:23.116 "process_window_size_kb": 1024 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "bdev_iscsi_set_options", 00:23:23.116 "params": { 00:23:23.116 "timeout_sec": 30 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "bdev_nvme_set_options", 00:23:23.116 "params": { 00:23:23.116 "action_on_timeout": "none", 00:23:23.116 "timeout_us": 0, 00:23:23.116 "timeout_admin_us": 0, 00:23:23.116 "keep_alive_timeout_ms": 10000, 00:23:23.116 "arbitration_burst": 0, 00:23:23.116 "low_priority_weight": 0, 00:23:23.116 "medium_priority_weight": 0, 00:23:23.116 "high_priority_weight": 0, 00:23:23.116 "nvme_adminq_poll_period_us": 10000, 00:23:23.116 "nvme_ioq_poll_period_us": 0, 00:23:23.116 "io_queue_requests": 0, 00:23:23.116 "delay_cmd_submit": true, 00:23:23.116 "transport_retry_count": 4, 00:23:23.116 "bdev_retry_count": 3, 00:23:23.116 "transport_ack_timeout": 0, 00:23:23.116 "ctrlr_loss_timeout_sec": 0, 00:23:23.116 "reconnect_delay_sec": 0, 00:23:23.116 "fast_io_fail_timeout_sec": 0, 00:23:23.116 "disable_auto_failback": false, 00:23:23.116 "generate_uuids": false, 00:23:23.116 "transport_tos": 0, 00:23:23.116 "nvme_error_stat": false, 00:23:23.116 "rdma_srq_size": 0, 00:23:23.116 "io_path_stat": false, 00:23:23.116 "allow_accel_sequence": false, 00:23:23.116 "rdma_max_cq_size": 0, 00:23:23.116 "rdma_cm_event_timeout_ms": 0, 00:23:23.116 "dhchap_digests": [ 00:23:23.116 "sha256", 00:23:23.116 "sha384", 00:23:23.116 "sha512" 00:23:23.116 ], 00:23:23.116 "dhchap_dhgroups": [ 00:23:23.116 "null", 00:23:23.116 "ffdhe2048", 00:23:23.116 "ffdhe3072", 00:23:23.116 "ffdhe4096", 00:23:23.116 "ffdhe6144", 00:23:23.116 "ffdhe8192" 00:23:23.116 ] 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "bdev_nvme_set_hotplug", 00:23:23.116 "params": { 00:23:23.116 "period_us": 100000, 00:23:23.116 "enable": false 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "bdev_malloc_create", 00:23:23.116 "params": { 00:23:23.116 "name": "malloc0", 00:23:23.116 "num_blocks": 8192, 00:23:23.116 "block_size": 4096, 00:23:23.116 "physical_block_size": 4096, 00:23:23.116 "uuid": 
"5149002b-805e-4241-8c2f-7203d6e2a9b4", 00:23:23.116 "optimal_io_boundary": 0 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "bdev_wait_for_examine" 00:23:23.116 } 00:23:23.116 ] 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "subsystem": "nbd", 00:23:23.116 "config": [] 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "subsystem": "scheduler", 00:23:23.116 "config": [ 00:23:23.116 { 00:23:23.116 "method": "framework_set_scheduler", 00:23:23.116 "params": { 00:23:23.116 "name": "static" 00:23:23.116 } 00:23:23.116 } 00:23:23.116 ] 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "subsystem": "nvmf", 00:23:23.116 "config": [ 00:23:23.116 { 00:23:23.116 "method": "nvmf_set_config", 00:23:23.116 "params": { 00:23:23.116 "discovery_filter": "match_any", 00:23:23.116 "admin_cmd_passthru": { 00:23:23.116 "identify_ctrlr": false 00:23:23.116 } 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "nvmf_set_max_subsystems", 00:23:23.116 "params": { 00:23:23.116 "max_subsystems": 1024 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "nvmf_set_crdt", 00:23:23.116 "params": { 00:23:23.116 "crdt1": 0, 00:23:23.116 "crdt2": 0, 00:23:23.116 "crdt3": 0 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "nvmf_create_transport", 00:23:23.116 "params": { 00:23:23.116 "trtype": "TCP", 00:23:23.116 "max_queue_depth": 128, 00:23:23.116 "max_io_qpairs_per_ctrlr": 127, 00:23:23.116 "in_capsule_data_size": 4096, 00:23:23.116 "max_io_size": 131072, 00:23:23.116 "io_unit_size": 131072, 00:23:23.116 "max_aq_depth": 128, 00:23:23.116 "num_shared_buffers": 511, 00:23:23.116 "buf_cache_size": 4294967295, 00:23:23.116 "dif_insert_or_strip": false, 00:23:23.116 "zcopy": false, 00:23:23.116 "c2h_success": false, 00:23:23.116 "sock_priority": 0, 00:23:23.116 "abort_timeout_sec": 1, 00:23:23.116 "ack_timeout": 0, 00:23:23.116 "data_wr_pool_size": 0 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "nvmf_create_subsystem", 00:23:23.116 "params": { 00:23:23.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.116 "allow_any_host": false, 00:23:23.116 "serial_number": "SPDK00000000000001", 00:23:23.116 "model_number": "SPDK bdev Controller", 00:23:23.116 "max_namespaces": 10, 00:23:23.116 "min_cntlid": 1, 00:23:23.116 "max_cntlid": 65519, 00:23:23.116 "ana_reporting": false 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "nvmf_subsystem_add_host", 00:23:23.116 "params": { 00:23:23.116 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.116 "host": "nqn.2016-06.io.spdk:host1", 00:23:23.116 "psk": "/tmp/tmp.8mk28b471L" 00:23:23.116 } 00:23:23.116 }, 00:23:23.116 { 00:23:23.116 "method": "nvmf_subsystem_add_ns", 00:23:23.117 "params": { 00:23:23.117 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.117 "namespace": { 00:23:23.117 "nsid": 1, 00:23:23.117 "bdev_name": "malloc0", 00:23:23.117 "nguid": "5149002B805E42418C2F7203D6E2A9B4", 00:23:23.117 "uuid": "5149002b-805e-4241-8c2f-7203d6e2a9b4", 00:23:23.117 "no_auto_visible": false 00:23:23.117 } 00:23:23.117 } 00:23:23.117 }, 00:23:23.117 { 00:23:23.117 "method": "nvmf_subsystem_add_listener", 00:23:23.117 "params": { 00:23:23.117 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.117 "listen_address": { 00:23:23.117 "trtype": "TCP", 00:23:23.117 "adrfam": "IPv4", 00:23:23.117 "traddr": "10.0.0.2", 00:23:23.117 "trsvcid": "4420" 00:23:23.117 }, 00:23:23.117 "secure_channel": true 00:23:23.117 } 00:23:23.117 } 00:23:23.117 ] 00:23:23.117 } 00:23:23.117 ] 00:23:23.117 }' 00:23:23.117 21:29:57 
nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:23.376 21:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:23:23.376 "subsystems": [ 00:23:23.376 { 00:23:23.376 "subsystem": "keyring", 00:23:23.376 "config": [] 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "subsystem": "iobuf", 00:23:23.376 "config": [ 00:23:23.376 { 00:23:23.376 "method": "iobuf_set_options", 00:23:23.376 "params": { 00:23:23.376 "small_pool_count": 8192, 00:23:23.376 "large_pool_count": 1024, 00:23:23.376 "small_bufsize": 8192, 00:23:23.376 "large_bufsize": 135168 00:23:23.376 } 00:23:23.376 } 00:23:23.376 ] 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "subsystem": "sock", 00:23:23.376 "config": [ 00:23:23.376 { 00:23:23.376 "method": "sock_set_default_impl", 00:23:23.376 "params": { 00:23:23.376 "impl_name": "posix" 00:23:23.376 } 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "method": "sock_impl_set_options", 00:23:23.376 "params": { 00:23:23.376 "impl_name": "ssl", 00:23:23.376 "recv_buf_size": 4096, 00:23:23.376 "send_buf_size": 4096, 00:23:23.376 "enable_recv_pipe": true, 00:23:23.376 "enable_quickack": false, 00:23:23.376 "enable_placement_id": 0, 00:23:23.376 "enable_zerocopy_send_server": true, 00:23:23.376 "enable_zerocopy_send_client": false, 00:23:23.376 "zerocopy_threshold": 0, 00:23:23.376 "tls_version": 0, 00:23:23.376 "enable_ktls": false 00:23:23.376 } 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "method": "sock_impl_set_options", 00:23:23.376 "params": { 00:23:23.376 "impl_name": "posix", 00:23:23.376 "recv_buf_size": 2097152, 00:23:23.376 "send_buf_size": 2097152, 00:23:23.376 "enable_recv_pipe": true, 00:23:23.376 "enable_quickack": false, 00:23:23.376 "enable_placement_id": 0, 00:23:23.376 "enable_zerocopy_send_server": true, 00:23:23.376 "enable_zerocopy_send_client": false, 00:23:23.376 "zerocopy_threshold": 0, 00:23:23.376 "tls_version": 0, 00:23:23.376 "enable_ktls": false 00:23:23.376 } 00:23:23.376 } 00:23:23.376 ] 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "subsystem": "vmd", 00:23:23.376 "config": [] 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "subsystem": "accel", 00:23:23.376 "config": [ 00:23:23.376 { 00:23:23.376 "method": "accel_set_options", 00:23:23.376 "params": { 00:23:23.376 "small_cache_size": 128, 00:23:23.376 "large_cache_size": 16, 00:23:23.376 "task_count": 2048, 00:23:23.376 "sequence_count": 2048, 00:23:23.376 "buf_count": 2048 00:23:23.376 } 00:23:23.376 } 00:23:23.376 ] 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "subsystem": "bdev", 00:23:23.376 "config": [ 00:23:23.376 { 00:23:23.376 "method": "bdev_set_options", 00:23:23.376 "params": { 00:23:23.376 "bdev_io_pool_size": 65535, 00:23:23.376 "bdev_io_cache_size": 256, 00:23:23.376 "bdev_auto_examine": true, 00:23:23.376 "iobuf_small_cache_size": 128, 00:23:23.376 "iobuf_large_cache_size": 16 00:23:23.376 } 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "method": "bdev_raid_set_options", 00:23:23.376 "params": { 00:23:23.376 "process_window_size_kb": 1024 00:23:23.376 } 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "method": "bdev_iscsi_set_options", 00:23:23.376 "params": { 00:23:23.376 "timeout_sec": 30 00:23:23.376 } 00:23:23.376 }, 00:23:23.376 { 00:23:23.376 "method": "bdev_nvme_set_options", 00:23:23.376 "params": { 00:23:23.376 "action_on_timeout": "none", 00:23:23.376 "timeout_us": 0, 00:23:23.376 "timeout_admin_us": 0, 00:23:23.376 "keep_alive_timeout_ms": 10000, 00:23:23.376 "arbitration_burst": 0, 
00:23:23.376 "low_priority_weight": 0, 00:23:23.376 "medium_priority_weight": 0, 00:23:23.376 "high_priority_weight": 0, 00:23:23.376 "nvme_adminq_poll_period_us": 10000, 00:23:23.376 "nvme_ioq_poll_period_us": 0, 00:23:23.376 "io_queue_requests": 512, 00:23:23.376 "delay_cmd_submit": true, 00:23:23.376 "transport_retry_count": 4, 00:23:23.376 "bdev_retry_count": 3, 00:23:23.376 "transport_ack_timeout": 0, 00:23:23.376 "ctrlr_loss_timeout_sec": 0, 00:23:23.376 "reconnect_delay_sec": 0, 00:23:23.376 "fast_io_fail_timeout_sec": 0, 00:23:23.376 "disable_auto_failback": false, 00:23:23.376 "generate_uuids": false, 00:23:23.376 "transport_tos": 0, 00:23:23.376 "nvme_error_stat": false, 00:23:23.376 "rdma_srq_size": 0, 00:23:23.376 "io_path_stat": false, 00:23:23.376 "allow_accel_sequence": false, 00:23:23.376 "rdma_max_cq_size": 0, 00:23:23.376 "rdma_cm_event_timeout_ms": 0, 00:23:23.376 "dhchap_digests": [ 00:23:23.376 "sha256", 00:23:23.376 "sha384", 00:23:23.376 "sha512" 00:23:23.376 ], 00:23:23.376 "dhchap_dhgroups": [ 00:23:23.376 "null", 00:23:23.376 "ffdhe2048", 00:23:23.376 "ffdhe3072", 00:23:23.376 "ffdhe4096", 00:23:23.376 "ffdhe6144", 00:23:23.376 "ffdhe8192" 00:23:23.376 ] 00:23:23.376 } 00:23:23.376 }, 00:23:23.377 { 00:23:23.377 "method": "bdev_nvme_attach_controller", 00:23:23.377 "params": { 00:23:23.377 "name": "TLSTEST", 00:23:23.377 "trtype": "TCP", 00:23:23.377 "adrfam": "IPv4", 00:23:23.377 "traddr": "10.0.0.2", 00:23:23.377 "trsvcid": "4420", 00:23:23.377 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.377 "prchk_reftag": false, 00:23:23.377 "prchk_guard": false, 00:23:23.377 "ctrlr_loss_timeout_sec": 0, 00:23:23.377 "reconnect_delay_sec": 0, 00:23:23.377 "fast_io_fail_timeout_sec": 0, 00:23:23.377 "psk": "/tmp/tmp.8mk28b471L", 00:23:23.377 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:23.377 "hdgst": false, 00:23:23.377 "ddgst": false 00:23:23.377 } 00:23:23.377 }, 00:23:23.377 { 00:23:23.377 "method": "bdev_nvme_set_hotplug", 00:23:23.377 "params": { 00:23:23.377 "period_us": 100000, 00:23:23.377 "enable": false 00:23:23.377 } 00:23:23.377 }, 00:23:23.377 { 00:23:23.377 "method": "bdev_wait_for_examine" 00:23:23.377 } 00:23:23.377 ] 00:23:23.377 }, 00:23:23.377 { 00:23:23.377 "subsystem": "nbd", 00:23:23.377 "config": [] 00:23:23.377 } 00:23:23.377 ] 00:23:23.377 }' 00:23:23.377 21:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@199 -- # killprocess 946802 00:23:23.377 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 946802 ']' 00:23:23.377 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 946802 00:23:23.377 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:23.377 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.377 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 946802 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 946802' 00:23:23.636 killing process with pid 946802 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 946802 00:23:23.636 Received shutdown signal, test time was about 10.000000 seconds 00:23:23.636 00:23:23.636 Latency(us) 00:23:23.636 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:23:23.636 =================================================================================================================== 00:23:23.636 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:23.636 [2024-07-11 21:29:58.153474] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 946802 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@200 -- # killprocess 946613 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 946613 ']' 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 946613 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 946613 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 946613' 00:23:23.636 killing process with pid 946613 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 946613 00:23:23.636 [2024-07-11 21:29:58.396657] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:23:23.636 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 946613 00:23:23.895 21:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:23.895 21:29:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:23.895 21:29:58 nvmf_tcp.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:23:23.895 "subsystems": [ 00:23:23.895 { 00:23:23.895 "subsystem": "keyring", 00:23:23.895 "config": [] 00:23:23.895 }, 00:23:23.895 { 00:23:23.895 "subsystem": "iobuf", 00:23:23.895 "config": [ 00:23:23.895 { 00:23:23.895 "method": "iobuf_set_options", 00:23:23.895 "params": { 00:23:23.895 "small_pool_count": 8192, 00:23:23.895 "large_pool_count": 1024, 00:23:23.895 "small_bufsize": 8192, 00:23:23.895 "large_bufsize": 135168 00:23:23.895 } 00:23:23.895 } 00:23:23.895 ] 00:23:23.895 }, 00:23:23.895 { 00:23:23.895 "subsystem": "sock", 00:23:23.895 "config": [ 00:23:23.895 { 00:23:23.895 "method": "sock_set_default_impl", 00:23:23.895 "params": { 00:23:23.895 "impl_name": "posix" 00:23:23.895 } 00:23:23.895 }, 00:23:23.895 { 00:23:23.895 "method": "sock_impl_set_options", 00:23:23.895 "params": { 00:23:23.895 "impl_name": "ssl", 00:23:23.895 "recv_buf_size": 4096, 00:23:23.895 "send_buf_size": 4096, 00:23:23.895 "enable_recv_pipe": true, 00:23:23.895 "enable_quickack": false, 00:23:23.895 "enable_placement_id": 0, 00:23:23.895 "enable_zerocopy_send_server": true, 00:23:23.895 "enable_zerocopy_send_client": false, 00:23:23.895 "zerocopy_threshold": 0, 00:23:23.895 "tls_version": 0, 00:23:23.895 "enable_ktls": false 00:23:23.895 } 00:23:23.895 }, 00:23:23.895 { 00:23:23.895 "method": "sock_impl_set_options", 00:23:23.895 "params": { 00:23:23.895 "impl_name": "posix", 00:23:23.895 "recv_buf_size": 2097152, 00:23:23.895 "send_buf_size": 2097152, 00:23:23.895 "enable_recv_pipe": true, 00:23:23.895 
"enable_quickack": false, 00:23:23.895 "enable_placement_id": 0, 00:23:23.895 "enable_zerocopy_send_server": true, 00:23:23.895 "enable_zerocopy_send_client": false, 00:23:23.895 "zerocopy_threshold": 0, 00:23:23.895 "tls_version": 0, 00:23:23.895 "enable_ktls": false 00:23:23.895 } 00:23:23.895 } 00:23:23.895 ] 00:23:23.895 }, 00:23:23.895 { 00:23:23.895 "subsystem": "vmd", 00:23:23.895 "config": [] 00:23:23.895 }, 00:23:23.895 { 00:23:23.895 "subsystem": "accel", 00:23:23.895 "config": [ 00:23:23.895 { 00:23:23.895 "method": "accel_set_options", 00:23:23.895 "params": { 00:23:23.895 "small_cache_size": 128, 00:23:23.895 "large_cache_size": 16, 00:23:23.895 "task_count": 2048, 00:23:23.895 "sequence_count": 2048, 00:23:23.895 "buf_count": 2048 00:23:23.895 } 00:23:23.895 } 00:23:23.896 ] 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "subsystem": "bdev", 00:23:23.896 "config": [ 00:23:23.896 { 00:23:23.896 "method": "bdev_set_options", 00:23:23.896 "params": { 00:23:23.896 "bdev_io_pool_size": 65535, 00:23:23.896 "bdev_io_cache_size": 256, 00:23:23.896 "bdev_auto_examine": true, 00:23:23.896 "iobuf_small_cache_size": 128, 00:23:23.896 "iobuf_large_cache_size": 16 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "bdev_raid_set_options", 00:23:23.896 "params": { 00:23:23.896 "process_window_size_kb": 1024 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "bdev_iscsi_set_options", 00:23:23.896 "params": { 00:23:23.896 "timeout_sec": 30 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "bdev_nvme_set_options", 00:23:23.896 "params": { 00:23:23.896 "action_on_timeout": "none", 00:23:23.896 "timeout_us": 0, 00:23:23.896 "timeout_admin_us": 0, 00:23:23.896 "keep_alive_timeout_ms": 10000, 00:23:23.896 "arbitration_burst": 0, 00:23:23.896 "low_priority_weight": 0, 00:23:23.896 "medium_priority_weight": 0, 00:23:23.896 "high_priority_weight": 0, 00:23:23.896 "nvme_adminq_poll_period_us": 10000, 00:23:23.896 "nvme_ioq_poll_period_us": 0, 00:23:23.896 "io_queue_requests": 0, 00:23:23.896 "delay_cmd_submit": true, 00:23:23.896 "transport_retry_count": 4, 00:23:23.896 "bdev_retry_count": 3, 00:23:23.896 "transport_ack_timeout": 0, 00:23:23.896 "ctrlr_loss_timeout_sec": 0, 00:23:23.896 "reconnect_delay_sec": 0, 00:23:23.896 "fast_io_fail_timeout_sec": 0, 00:23:23.896 "disable_auto_failback": false, 00:23:23.896 "generate_uuids": false, 00:23:23.896 "transport_tos": 0, 00:23:23.896 "nvme_error_stat": false, 00:23:23.896 "rdma_srq_size": 0, 00:23:23.896 "io_path_stat": false, 00:23:23.896 "allow_accel_sequence": false, 00:23:23.896 "rdma_max_cq_size": 0, 00:23:23.896 "rdma_cm_event_timeout_ms": 0, 00:23:23.896 "dhchap_digests": [ 00:23:23.896 "sha256", 00:23:23.896 "sha384", 00:23:23.896 "sha512" 00:23:23.896 ], 00:23:23.896 "dhchap_dhgroups": [ 00:23:23.896 "null", 00:23:23.896 "ffdhe2048", 00:23:23.896 "ffdhe3072", 00:23:23.896 "ffdhe4096", 00:23:23.896 "ffdhe6144", 00:23:23.896 "ffdhe8192" 00:23:23.896 ] 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "bdev_nvme_set_hotplug", 00:23:23.896 "params": { 00:23:23.896 "period_us": 100000, 00:23:23.896 "enable": false 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "bdev_malloc_create", 00:23:23.896 "params": { 00:23:23.896 "name": "malloc0", 00:23:23.896 "num_blocks": 8192, 00:23:23.896 "block_size": 4096, 00:23:23.896 "physical_block_size": 4096, 00:23:23.896 "uuid": "5149002b-805e-4241-8c2f-7203d6e2a9b4", 00:23:23.896 "optimal_io_boundary": 0 00:23:23.896 } 
00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "bdev_wait_for_examine" 00:23:23.896 } 00:23:23.896 ] 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "subsystem": "nbd", 00:23:23.896 "config": [] 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "subsystem": "scheduler", 00:23:23.896 "config": [ 00:23:23.896 { 00:23:23.896 "method": "framework_set_scheduler", 00:23:23.896 "params": { 00:23:23.896 "name": "static" 00:23:23.896 } 00:23:23.896 } 00:23:23.896 ] 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "subsystem": "nvmf", 00:23:23.896 "config": [ 00:23:23.896 { 00:23:23.896 "method": "nvmf_set_config", 00:23:23.896 "params": { 00:23:23.896 "discovery_filter": "match_any", 00:23:23.896 "admin_cmd_passthru": { 00:23:23.896 "identify_ctrlr": false 00:23:23.896 } 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "nvmf_set_max_subsystems", 00:23:23.896 "params": { 00:23:23.896 "max_subsystems": 1024 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "nvmf_set_crdt", 00:23:23.896 "params": { 00:23:23.896 "crdt1": 0, 00:23:23.896 "crdt2": 0, 00:23:23.896 "crdt3": 0 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "nvmf_create_transport", 00:23:23.896 "params": { 00:23:23.896 "trtype": "TCP", 00:23:23.896 "max_queue_depth": 128, 00:23:23.896 "max_io_qpairs_per_ctrlr": 127, 00:23:23.896 "in_capsule_data_size": 4096, 00:23:23.896 "max_io_size": 131072, 00:23:23.896 "io_unit_size": 131072, 00:23:23.896 "max_aq_depth": 128, 00:23:23.896 "num_shared_buffers": 511, 00:23:23.896 "buf_cache_size": 4294967295, 00:23:23.896 "dif_insert_or_strip": false, 00:23:23.896 "zcopy": false, 00:23:23.896 "c2h_success": false, 00:23:23.896 "sock_priority": 0, 00:23:23.896 "abort_timeout_sec": 1, 00:23:23.896 "ack_timeout": 0, 00:23:23.896 "data_wr_pool_size": 0 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "nvmf_create_subsystem", 00:23:23.896 "params": { 00:23:23.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.896 "allow_any_host": false, 00:23:23.896 "serial_number": "SPDK00000000000001", 00:23:23.896 "model_number": "SPDK bdev Controller", 00:23:23.896 "max_namespaces": 10, 00:23:23.896 "min_cntlid": 1, 00:23:23.896 "max_cntlid": 65519, 00:23:23.896 "ana_reporting": false 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "nvmf_subsystem_add_host", 00:23:23.896 "params": { 00:23:23.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.896 "host": "nqn.2016-06.io.spdk:host1", 00:23:23.896 "psk": "/tmp/tmp.8mk28b471L" 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "nvmf_subsystem_add_ns", 00:23:23.896 "params": { 00:23:23.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.896 "namespace": { 00:23:23.896 "nsid": 1, 00:23:23.896 "bdev_name": "malloc0", 00:23:23.896 "nguid": "5149002B805E42418C2F7203D6E2A9B4", 00:23:23.896 "uuid": "5149002b-805e-4241-8c2f-7203d6e2a9b4", 00:23:23.896 "no_auto_visible": false 00:23:23.896 } 00:23:23.896 } 00:23:23.896 }, 00:23:23.896 { 00:23:23.896 "method": "nvmf_subsystem_add_listener", 00:23:23.896 "params": { 00:23:23.896 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:23.896 "listen_address": { 00:23:23.896 "trtype": "TCP", 00:23:23.896 "adrfam": "IPv4", 00:23:23.896 "traddr": "10.0.0.2", 00:23:23.896 "trsvcid": "4420" 00:23:23.896 }, 00:23:23.896 "secure_channel": true 00:23:23.896 } 00:23:23.896 } 00:23:23.896 ] 00:23:23.896 } 00:23:23.896 ] 00:23:23.896 }' 00:23:23.896 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:23.896 21:29:58 
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:23.896 21:29:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=947051 00:23:23.896 21:29:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:23.896 21:29:58 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 947051 00:23:23.897 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 947051 ']' 00:23:23.897 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:23.897 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:23.897 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:23.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:23.897 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:23.897 21:29:58 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.155 [2024-07-11 21:29:58.699502] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:24.155 [2024-07-11 21:29:58.699588] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:24.155 EAL: No free 2048 kB hugepages reported on node 1 00:23:24.155 [2024-07-11 21:29:58.763640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:24.155 [2024-07-11 21:29:58.847234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:24.155 [2024-07-11 21:29:58.847287] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:24.155 [2024-07-11 21:29:58.847314] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:24.155 [2024-07-11 21:29:58.847325] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:24.155 [2024-07-11 21:29:58.847336] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
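This final target start works differently from the earlier ones: instead of issuing the setup RPCs one at a time, tls.sh@203 replays the JSON captured earlier with save_config, handing it to nvmf_tgt as -c /dev/fd/62. The same pattern reduced to a sketch; the bare binary name and tgt.json are placeholders for the full Jenkins paths in the log, and the exact /dev/fd number is whatever bash assigns the process substitution (62 for the target and 63 for bdevperf in this run):

    # capture the live configuration, TLS listener and PSK host entry included
    rpc.py save_config > tgt.json
    # restart non-interactively; <(cat tgt.json) shows up in the child as /dev/fd/NN
    nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c <(cat tgt.json)

The notices that follow (TCP transport init, the PSK-path deprecation warning, the TLS listener) are the target re-creating that configuration at startup rather than over RPC; bdevperf is then launched the same way with its own saved config on /dev/fd/63.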
00:23:24.155 [2024-07-11 21:29:58.847414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:24.413 [2024-07-11 21:29:59.080717] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:24.413 [2024-07-11 21:29:59.096691] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:24.413 [2024-07-11 21:29:59.112759] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:24.413 [2024-07-11 21:29:59.130900] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=947201 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 947201 /var/tmp/bdevperf.sock 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 947201 ']' 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:24.979 21:29:59 nvmf_tcp.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:23:24.979 "subsystems": [ 00:23:24.979 { 00:23:24.979 "subsystem": "keyring", 00:23:24.979 "config": [] 00:23:24.979 }, 00:23:24.979 { 00:23:24.979 "subsystem": "iobuf", 00:23:24.979 "config": [ 00:23:24.979 { 00:23:24.979 "method": "iobuf_set_options", 00:23:24.979 "params": { 00:23:24.979 "small_pool_count": 8192, 00:23:24.979 "large_pool_count": 1024, 00:23:24.979 "small_bufsize": 8192, 00:23:24.979 "large_bufsize": 135168 00:23:24.979 } 00:23:24.979 } 00:23:24.979 ] 00:23:24.979 }, 00:23:24.979 { 00:23:24.979 "subsystem": "sock", 00:23:24.979 "config": [ 00:23:24.979 { 00:23:24.979 "method": "sock_set_default_impl", 00:23:24.980 "params": { 00:23:24.980 "impl_name": "posix" 00:23:24.980 } 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "method": "sock_impl_set_options", 00:23:24.980 "params": { 00:23:24.980 "impl_name": "ssl", 00:23:24.980 "recv_buf_size": 4096, 00:23:24.980 "send_buf_size": 4096, 00:23:24.980 "enable_recv_pipe": true, 00:23:24.980 "enable_quickack": false, 00:23:24.980 "enable_placement_id": 0, 00:23:24.980 "enable_zerocopy_send_server": true, 00:23:24.980 "enable_zerocopy_send_client": false, 00:23:24.980 "zerocopy_threshold": 0, 00:23:24.980 "tls_version": 0, 00:23:24.980 "enable_ktls": false 00:23:24.980 } 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "method": "sock_impl_set_options", 00:23:24.980 "params": { 00:23:24.980 "impl_name": "posix", 00:23:24.980 "recv_buf_size": 2097152, 00:23:24.980 "send_buf_size": 2097152, 00:23:24.980 "enable_recv_pipe": true, 00:23:24.980 
"enable_quickack": false, 00:23:24.980 "enable_placement_id": 0, 00:23:24.980 "enable_zerocopy_send_server": true, 00:23:24.980 "enable_zerocopy_send_client": false, 00:23:24.980 "zerocopy_threshold": 0, 00:23:24.980 "tls_version": 0, 00:23:24.980 "enable_ktls": false 00:23:24.980 } 00:23:24.980 } 00:23:24.980 ] 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "subsystem": "vmd", 00:23:24.980 "config": [] 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "subsystem": "accel", 00:23:24.980 "config": [ 00:23:24.980 { 00:23:24.980 "method": "accel_set_options", 00:23:24.980 "params": { 00:23:24.980 "small_cache_size": 128, 00:23:24.980 "large_cache_size": 16, 00:23:24.980 "task_count": 2048, 00:23:24.980 "sequence_count": 2048, 00:23:24.980 "buf_count": 2048 00:23:24.980 } 00:23:24.980 } 00:23:24.980 ] 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "subsystem": "bdev", 00:23:24.980 "config": [ 00:23:24.980 { 00:23:24.980 "method": "bdev_set_options", 00:23:24.980 "params": { 00:23:24.980 "bdev_io_pool_size": 65535, 00:23:24.980 "bdev_io_cache_size": 256, 00:23:24.980 "bdev_auto_examine": true, 00:23:24.980 "iobuf_small_cache_size": 128, 00:23:24.980 "iobuf_large_cache_size": 16 00:23:24.980 } 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "method": "bdev_raid_set_options", 00:23:24.980 "params": { 00:23:24.980 "process_window_size_kb": 1024 00:23:24.980 } 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "method": "bdev_iscsi_set_options", 00:23:24.980 "params": { 00:23:24.980 "timeout_sec": 30 00:23:24.980 } 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "method": "bdev_nvme_set_options", 00:23:24.980 "params": { 00:23:24.980 "action_on_timeout": "none", 00:23:24.980 "timeout_us": 0, 00:23:24.980 "timeout_admin_us": 0, 00:23:24.980 "keep_alive_timeout_ms": 10000, 00:23:24.980 "arbitration_burst": 0, 00:23:24.980 "low_priority_weight": 0, 00:23:24.980 "medium_priority_weight": 0, 00:23:24.980 "high_priority_weight": 0, 00:23:24.980 "nvme_adminq_poll_period_us": 10000, 00:23:24.980 "nvme_ioq_poll_period_us": 0, 00:23:24.980 "io_queue_requests": 512, 00:23:24.980 "delay_cmd_submit": true, 00:23:24.980 "transport_retry_count": 4, 00:23:24.980 "bdev_retry_count": 3, 00:23:24.980 "transport_ack_timeout": 0, 00:23:24.980 "ctrlr_loss_timeout_sec": 0, 00:23:24.980 "reconnect_delay_sec": 0, 00:23:24.980 "fast_io_fail_timeout_sec": 0, 00:23:24.980 "disable_auto_failback": false, 00:23:24.980 "generate_uuids": false, 00:23:24.980 "transport_tos": 0, 00:23:24.980 "nvme_error_stat": false, 00:23:24.980 "rdma_srq_size": 0, 00:23:24.980 "io_path_stat": false, 00:23:24.980 "allow_accel_sequence": false, 00:23:24.980 "rdma_max_cq_size": 0, 00:23:24.980 "rdma_cm_event_timeout_ms": 0, 00:23:24.980 "dhchap_digests": [ 00:23:24.980 "sha256", 00:23:24.980 "sha384", 00:23:24.980 "sha512" 00:23:24.980 ], 00:23:24.980 "dhchap_dhgroups": [ 00:23:24.980 "null", 00:23:24.980 "ffdhe2048", 00:23:24.980 "ffdhe3072", 00:23:24.980 "ffdhe4096", 00:23:24.980 "ffdhe6144", 00:23:24.980 "ffdhe8192" 00:23:24.980 ] 00:23:24.980 } 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "method": "bdev_nvme_attach_controller", 00:23:24.980 "params": { 00:23:24.980 "name": "TLSTEST", 00:23:24.980 "trtype": "TCP", 00:23:24.980 "adrfam": "IPv4", 00:23:24.980 "traddr": "10.0.0.2", 00:23:24.980 "trsvcid": "4420", 00:23:24.980 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:24.980 "prchk_reftag": false, 00:23:24.980 "prchk_guard": false, 00:23:24.980 "ctrlr_loss_timeout_sec": 0, 00:23:24.980 "reconnect_delay_sec": 0, 00:23:24.980 "fast_io_fail_timeout_sec": 0, 00:23:24.980 
"psk": "/tmp/tmp.8mk28b471L", 00:23:24.980 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:24.980 "hdgst": false, 00:23:24.980 "ddgst": false 00:23:24.980 } 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "method": "bdev_nvme_set_hotplug", 00:23:24.980 "params": { 00:23:24.980 "period_us": 100000, 00:23:24.980 "enable": false 00:23:24.980 } 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "method": "bdev_wait_for_examine" 00:23:24.980 } 00:23:24.980 ] 00:23:24.980 }, 00:23:24.980 { 00:23:24.980 "subsystem": "nbd", 00:23:24.980 "config": [] 00:23:24.980 } 00:23:24.980 ] 00:23:24.980 }' 00:23:24.980 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:24.980 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:24.980 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:24.980 21:29:59 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:24.980 [2024-07-11 21:29:59.699542] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:24.980 [2024-07-11 21:29:59.699618] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid947201 ] 00:23:24.980 EAL: No free 2048 kB hugepages reported on node 1 00:23:25.239 [2024-07-11 21:29:59.757556] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:25.239 [2024-07-11 21:29:59.840726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:25.239 [2024-07-11 21:30:00.008996] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:25.239 [2024-07-11 21:30:00.009155] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:26.171 21:30:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:26.171 21:30:00 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:26.171 21:30:00 nvmf_tcp.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:26.171 Running I/O for 10 seconds... 
00:23:36.179
00:23:36.179 Latency(us)
00:23:36.179 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:36.179 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:23:36.179 Verification LBA range: start 0x0 length 0x2000
00:23:36.179 TLSTESTn1 : 10.02 3253.46 12.71 0.00 0.00 39266.76 10243.03 55147.33
00:23:36.179 ===================================================================================================================
00:23:36.179 Total : 3253.46 12.71 0.00 0.00 39266.76 10243.03 55147.33
00:23:36.179 0
00:23:36.179 21:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:23:36.179 21:30:10 nvmf_tcp.nvmf_tls -- target/tls.sh@214 -- # killprocess 947201
00:23:36.179 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 947201 ']'
00:23:36.179 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 947201
00:23:36.179 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:36.179 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:36.179 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 947201
00:23:36.180 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:23:36.180 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:23:36.180 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 947201'
00:23:36.180 killing process with pid 947201
00:23:36.180 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 947201
00:23:36.180 Received shutdown signal, test time was about 10.000000 seconds
00:23:36.180
00:23:36.180 Latency(us)
00:23:36.180 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:36.180 ===================================================================================================================
00:23:36.180 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:36.180 [2024-07-11 21:30:10.845348] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times
00:23:36.180 21:30:10 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 947201
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@215 -- # killprocess 947051
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 947051 ']'
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 947051
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 947051
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 947051'
00:23:36.438 killing process with pid 947051
00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 947051
00:23:36.438 [2024-07-11 21:30:11.101408] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit
1 times 00:23:36.438 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 947051 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=949154 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 949154 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 949154 ']' 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:36.696 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:36.696 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.696 [2024-07-11 21:30:11.392952] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:36.696 [2024-07-11 21:30:11.393046] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:36.696 EAL: No free 2048 kB hugepages reported on node 1 00:23:36.696 [2024-07-11 21:30:11.461265] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.955 [2024-07-11 21:30:11.551272] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:36.955 [2024-07-11 21:30:11.551335] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:36.955 [2024-07-11 21:30:11.551352] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:36.955 [2024-07-11 21:30:11.551366] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:36.955 [2024-07-11 21:30:11.551377] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
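A quick cross-check of the TLSTESTn1 summary above: the MiB/s column is just IOPS multiplied by the 4096-byte I/O size, 3253.46 IO/s * 4096 B = 13326172 B/s, and 13326172 / 1048576 = 12.71 MiB/s, matching the reported value.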
00:23:36.955 [2024-07-11 21:30:11.551407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.8mk28b471L 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.8mk28b471L 00:23:36.955 21:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:37.213 [2024-07-11 21:30:11.905588] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:37.213 21:30:11 nvmf_tcp.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:37.471 21:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:37.729 [2024-07-11 21:30:12.394861] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:37.729 [2024-07-11 21:30:12.395079] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:37.729 21:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:37.987 malloc0 00:23:37.987 21:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:38.246 21:30:12 nvmf_tcp.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.8mk28b471L 00:23:38.504 [2024-07-11 21:30:13.140658] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=949431 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 949431 /var/tmp/bdevperf.sock 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 949431 ']' 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:38.504 21:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.504 [2024-07-11 21:30:13.200312] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:38.504 [2024-07-11 21:30:13.200386] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949431 ] 00:23:38.504 EAL: No free 2048 kB hugepages reported on node 1 00:23:38.504 [2024-07-11 21:30:13.261896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.761 [2024-07-11 21:30:13.348329] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.761 21:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:38.761 21:30:13 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:38.761 21:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8mk28b471L 00:23:39.018 21:30:13 nvmf_tcp.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:39.275 [2024-07-11 21:30:13.927195] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.275 nvme0n1 00:23:39.275 21:30:14 nvmf_tcp.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.534 Running I/O for 1 seconds... 
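Note the difference from the first bdevperf run: there the PSK file path was embedded directly in the attach parameters ("psk": "/tmp/tmp.8mk28b471L", the deprecated flow behind the v24.09 removal warnings above), while this run registers the key in a keyring and refers to it by name. The two RPCs, as issued at target/tls.sh@227-228 (rpc.py path shortened):
# register the PSK interchange file under the name key0
rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8mk28b471L
# attach the TLS-secured controller, referencing the key by name rather than by path
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1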
00:23:40.470
00:23:40.470 Latency(us)
00:23:40.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:40.470 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:23:40.470 Verification LBA range: start 0x0 length 0x2000
00:23:40.470 nvme0n1 : 1.03 3134.17 12.24 0.00 0.00 40344.72 9175.04 37865.24
00:23:40.470 ===================================================================================================================
00:23:40.470 Total : 3134.17 12.24 0.00 0.00 40344.72 9175.04 37865.24
00:23:40.470 0
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@234 -- # killprocess 949431
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 949431 ']'
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 949431
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949431
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949431'
00:23:40.470 killing process with pid 949431
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 949431
00:23:40.470 Received shutdown signal, test time was about 1.000000 seconds
00:23:40.470
00:23:40.470 Latency(us)
00:23:40.470 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:40.470 ===================================================================================================================
00:23:40.470 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:40.470 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 949431
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@235 -- # killprocess 949154
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 949154 ']'
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 949154
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949154
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949154'
00:23:40.730 killing process with pid 949154
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 949154
00:23:40.730 [2024-07-11 21:30:15.431106] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times
00:23:40.730 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 949154
00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- target/tls.sh@238 -- # nvmfappstart
00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:23:40.989 21:30:15
nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=949707 00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 949707 00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 949707 ']' 00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:40.989 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:40.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:40.990 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:40.990 21:30:15 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:40.990 [2024-07-11 21:30:15.741683] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:40.990 [2024-07-11 21:30:15.741781] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:41.248 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.248 [2024-07-11 21:30:15.807318] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.248 [2024-07-11 21:30:15.897403] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:41.248 [2024-07-11 21:30:15.897463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:41.248 [2024-07-11 21:30:15.897480] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:41.248 [2024-07-11 21:30:15.897494] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:41.248 [2024-07-11 21:30:15.897505] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:41.248 [2024-07-11 21:30:15.897536] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:41.248 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.248 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:41.248 21:30:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:41.248 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:41.248 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@239 -- # rpc_cmd 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.507 [2024-07-11 21:30:16.049542] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:41.507 malloc0 00:23:41.507 [2024-07-11 21:30:16.082222] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:41.507 [2024-07-11 21:30:16.082490] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@252 -- # bdevperf_pid=949732 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@250 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@254 -- # waitforlisten 949732 /var/tmp/bdevperf.sock 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 949732 ']' 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:41.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:41.507 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:41.507 [2024-07-11 21:30:16.152538] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:23:41.507 [2024-07-11 21:30:16.152611] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid949732 ] 00:23:41.507 EAL: No free 2048 kB hugepages reported on node 1 00:23:41.507 [2024-07-11 21:30:16.214459] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:41.765 [2024-07-11 21:30:16.306498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:41.765 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:41.765 21:30:16 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:41.765 21:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.8mk28b471L 00:23:42.024 21:30:16 nvmf_tcp.nvmf_tls -- target/tls.sh@256 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:42.282 [2024-07-11 21:30:16.976386] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:42.282 nvme0n1 00:23:42.544 21:30:17 nvmf_tcp.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:42.544 Running I/O for 1 seconds... 00:23:43.479 00:23:43.479 Latency(us) 00:23:43.479 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.479 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:43.479 Verification LBA range: start 0x0 length 0x2000 00:23:43.479 nvme0n1 : 1.02 3182.05 12.43 0.00 0.00 39759.81 8446.86 44467.39 00:23:43.479 =================================================================================================================== 00:23:43.479 Total : 3182.05 12.43 0.00 0.00 39759.81 8446.86 44467.39 00:23:43.479 0 00:23:43.479 21:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # rpc_cmd save_config 00:23:43.479 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:43.479 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:43.737 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:43.737 21:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@263 -- # tgtcfg='{ 00:23:43.737 "subsystems": [ 00:23:43.737 { 00:23:43.737 "subsystem": "keyring", 00:23:43.737 "config": [ 00:23:43.737 { 00:23:43.737 "method": "keyring_file_add_key", 00:23:43.737 "params": { 00:23:43.737 "name": "key0", 00:23:43.737 "path": "/tmp/tmp.8mk28b471L" 00:23:43.737 } 00:23:43.737 } 00:23:43.737 ] 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "subsystem": "iobuf", 00:23:43.737 "config": [ 00:23:43.737 { 00:23:43.737 "method": "iobuf_set_options", 00:23:43.737 "params": { 00:23:43.737 "small_pool_count": 8192, 00:23:43.737 "large_pool_count": 1024, 00:23:43.737 "small_bufsize": 8192, 00:23:43.737 "large_bufsize": 135168 00:23:43.737 } 00:23:43.737 } 00:23:43.737 ] 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "subsystem": "sock", 00:23:43.737 "config": [ 00:23:43.737 { 00:23:43.737 "method": "sock_set_default_impl", 00:23:43.737 "params": { 00:23:43.737 "impl_name": "posix" 00:23:43.737 } 
00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "sock_impl_set_options", 00:23:43.737 "params": { 00:23:43.737 "impl_name": "ssl", 00:23:43.737 "recv_buf_size": 4096, 00:23:43.737 "send_buf_size": 4096, 00:23:43.737 "enable_recv_pipe": true, 00:23:43.737 "enable_quickack": false, 00:23:43.737 "enable_placement_id": 0, 00:23:43.737 "enable_zerocopy_send_server": true, 00:23:43.737 "enable_zerocopy_send_client": false, 00:23:43.737 "zerocopy_threshold": 0, 00:23:43.737 "tls_version": 0, 00:23:43.737 "enable_ktls": false 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "sock_impl_set_options", 00:23:43.737 "params": { 00:23:43.737 "impl_name": "posix", 00:23:43.737 "recv_buf_size": 2097152, 00:23:43.737 "send_buf_size": 2097152, 00:23:43.737 "enable_recv_pipe": true, 00:23:43.737 "enable_quickack": false, 00:23:43.737 "enable_placement_id": 0, 00:23:43.737 "enable_zerocopy_send_server": true, 00:23:43.737 "enable_zerocopy_send_client": false, 00:23:43.737 "zerocopy_threshold": 0, 00:23:43.737 "tls_version": 0, 00:23:43.737 "enable_ktls": false 00:23:43.737 } 00:23:43.737 } 00:23:43.737 ] 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "subsystem": "vmd", 00:23:43.737 "config": [] 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "subsystem": "accel", 00:23:43.737 "config": [ 00:23:43.737 { 00:23:43.737 "method": "accel_set_options", 00:23:43.737 "params": { 00:23:43.737 "small_cache_size": 128, 00:23:43.737 "large_cache_size": 16, 00:23:43.737 "task_count": 2048, 00:23:43.737 "sequence_count": 2048, 00:23:43.737 "buf_count": 2048 00:23:43.737 } 00:23:43.737 } 00:23:43.737 ] 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "subsystem": "bdev", 00:23:43.737 "config": [ 00:23:43.737 { 00:23:43.737 "method": "bdev_set_options", 00:23:43.737 "params": { 00:23:43.737 "bdev_io_pool_size": 65535, 00:23:43.737 "bdev_io_cache_size": 256, 00:23:43.737 "bdev_auto_examine": true, 00:23:43.737 "iobuf_small_cache_size": 128, 00:23:43.737 "iobuf_large_cache_size": 16 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "bdev_raid_set_options", 00:23:43.737 "params": { 00:23:43.737 "process_window_size_kb": 1024 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "bdev_iscsi_set_options", 00:23:43.737 "params": { 00:23:43.737 "timeout_sec": 30 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "bdev_nvme_set_options", 00:23:43.737 "params": { 00:23:43.737 "action_on_timeout": "none", 00:23:43.737 "timeout_us": 0, 00:23:43.737 "timeout_admin_us": 0, 00:23:43.737 "keep_alive_timeout_ms": 10000, 00:23:43.737 "arbitration_burst": 0, 00:23:43.737 "low_priority_weight": 0, 00:23:43.737 "medium_priority_weight": 0, 00:23:43.737 "high_priority_weight": 0, 00:23:43.737 "nvme_adminq_poll_period_us": 10000, 00:23:43.737 "nvme_ioq_poll_period_us": 0, 00:23:43.737 "io_queue_requests": 0, 00:23:43.737 "delay_cmd_submit": true, 00:23:43.737 "transport_retry_count": 4, 00:23:43.737 "bdev_retry_count": 3, 00:23:43.737 "transport_ack_timeout": 0, 00:23:43.737 "ctrlr_loss_timeout_sec": 0, 00:23:43.737 "reconnect_delay_sec": 0, 00:23:43.737 "fast_io_fail_timeout_sec": 0, 00:23:43.737 "disable_auto_failback": false, 00:23:43.737 "generate_uuids": false, 00:23:43.737 "transport_tos": 0, 00:23:43.737 "nvme_error_stat": false, 00:23:43.737 "rdma_srq_size": 0, 00:23:43.737 "io_path_stat": false, 00:23:43.737 "allow_accel_sequence": false, 00:23:43.737 "rdma_max_cq_size": 0, 00:23:43.737 "rdma_cm_event_timeout_ms": 0, 00:23:43.737 "dhchap_digests": [ 00:23:43.737 "sha256", 
00:23:43.737 "sha384", 00:23:43.737 "sha512" 00:23:43.737 ], 00:23:43.737 "dhchap_dhgroups": [ 00:23:43.737 "null", 00:23:43.737 "ffdhe2048", 00:23:43.737 "ffdhe3072", 00:23:43.737 "ffdhe4096", 00:23:43.737 "ffdhe6144", 00:23:43.737 "ffdhe8192" 00:23:43.737 ] 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "bdev_nvme_set_hotplug", 00:23:43.737 "params": { 00:23:43.737 "period_us": 100000, 00:23:43.737 "enable": false 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "bdev_malloc_create", 00:23:43.737 "params": { 00:23:43.737 "name": "malloc0", 00:23:43.737 "num_blocks": 8192, 00:23:43.737 "block_size": 4096, 00:23:43.737 "physical_block_size": 4096, 00:23:43.737 "uuid": "e6792d78-94b7-485c-a7ca-15eb58a8712a", 00:23:43.737 "optimal_io_boundary": 0 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "bdev_wait_for_examine" 00:23:43.737 } 00:23:43.737 ] 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "subsystem": "nbd", 00:23:43.737 "config": [] 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "subsystem": "scheduler", 00:23:43.737 "config": [ 00:23:43.737 { 00:23:43.737 "method": "framework_set_scheduler", 00:23:43.737 "params": { 00:23:43.737 "name": "static" 00:23:43.737 } 00:23:43.737 } 00:23:43.737 ] 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "subsystem": "nvmf", 00:23:43.737 "config": [ 00:23:43.737 { 00:23:43.737 "method": "nvmf_set_config", 00:23:43.737 "params": { 00:23:43.737 "discovery_filter": "match_any", 00:23:43.737 "admin_cmd_passthru": { 00:23:43.737 "identify_ctrlr": false 00:23:43.737 } 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "nvmf_set_max_subsystems", 00:23:43.737 "params": { 00:23:43.737 "max_subsystems": 1024 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "nvmf_set_crdt", 00:23:43.737 "params": { 00:23:43.737 "crdt1": 0, 00:23:43.737 "crdt2": 0, 00:23:43.737 "crdt3": 0 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "nvmf_create_transport", 00:23:43.737 "params": { 00:23:43.737 "trtype": "TCP", 00:23:43.737 "max_queue_depth": 128, 00:23:43.737 "max_io_qpairs_per_ctrlr": 127, 00:23:43.737 "in_capsule_data_size": 4096, 00:23:43.737 "max_io_size": 131072, 00:23:43.737 "io_unit_size": 131072, 00:23:43.737 "max_aq_depth": 128, 00:23:43.737 "num_shared_buffers": 511, 00:23:43.737 "buf_cache_size": 4294967295, 00:23:43.737 "dif_insert_or_strip": false, 00:23:43.737 "zcopy": false, 00:23:43.737 "c2h_success": false, 00:23:43.737 "sock_priority": 0, 00:23:43.737 "abort_timeout_sec": 1, 00:23:43.737 "ack_timeout": 0, 00:23:43.737 "data_wr_pool_size": 0 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "nvmf_create_subsystem", 00:23:43.737 "params": { 00:23:43.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.737 "allow_any_host": false, 00:23:43.737 "serial_number": "00000000000000000000", 00:23:43.737 "model_number": "SPDK bdev Controller", 00:23:43.737 "max_namespaces": 32, 00:23:43.737 "min_cntlid": 1, 00:23:43.737 "max_cntlid": 65519, 00:23:43.737 "ana_reporting": false 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "nvmf_subsystem_add_host", 00:23:43.737 "params": { 00:23:43.737 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.737 "host": "nqn.2016-06.io.spdk:host1", 00:23:43.737 "psk": "key0" 00:23:43.737 } 00:23:43.737 }, 00:23:43.737 { 00:23:43.737 "method": "nvmf_subsystem_add_ns", 00:23:43.737 "params": { 00:23:43.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.738 "namespace": { 00:23:43.738 "nsid": 1, 
00:23:43.738 "bdev_name": "malloc0", 00:23:43.738 "nguid": "E6792D7894B7485CA7CA15EB58A8712A", 00:23:43.738 "uuid": "e6792d78-94b7-485c-a7ca-15eb58a8712a", 00:23:43.738 "no_auto_visible": false 00:23:43.738 } 00:23:43.738 } 00:23:43.738 }, 00:23:43.738 { 00:23:43.738 "method": "nvmf_subsystem_add_listener", 00:23:43.738 "params": { 00:23:43.738 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.738 "listen_address": { 00:23:43.738 "trtype": "TCP", 00:23:43.738 "adrfam": "IPv4", 00:23:43.738 "traddr": "10.0.0.2", 00:23:43.738 "trsvcid": "4420" 00:23:43.738 }, 00:23:43.738 "secure_channel": true 00:23:43.738 } 00:23:43.738 } 00:23:43.738 ] 00:23:43.738 } 00:23:43.738 ] 00:23:43.738 }' 00:23:43.738 21:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:43.995 21:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@264 -- # bperfcfg='{ 00:23:43.995 "subsystems": [ 00:23:43.995 { 00:23:43.995 "subsystem": "keyring", 00:23:43.995 "config": [ 00:23:43.995 { 00:23:43.995 "method": "keyring_file_add_key", 00:23:43.995 "params": { 00:23:43.995 "name": "key0", 00:23:43.996 "path": "/tmp/tmp.8mk28b471L" 00:23:43.996 } 00:23:43.996 } 00:23:43.996 ] 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "subsystem": "iobuf", 00:23:43.996 "config": [ 00:23:43.996 { 00:23:43.996 "method": "iobuf_set_options", 00:23:43.996 "params": { 00:23:43.996 "small_pool_count": 8192, 00:23:43.996 "large_pool_count": 1024, 00:23:43.996 "small_bufsize": 8192, 00:23:43.996 "large_bufsize": 135168 00:23:43.996 } 00:23:43.996 } 00:23:43.996 ] 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "subsystem": "sock", 00:23:43.996 "config": [ 00:23:43.996 { 00:23:43.996 "method": "sock_set_default_impl", 00:23:43.996 "params": { 00:23:43.996 "impl_name": "posix" 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "sock_impl_set_options", 00:23:43.996 "params": { 00:23:43.996 "impl_name": "ssl", 00:23:43.996 "recv_buf_size": 4096, 00:23:43.996 "send_buf_size": 4096, 00:23:43.996 "enable_recv_pipe": true, 00:23:43.996 "enable_quickack": false, 00:23:43.996 "enable_placement_id": 0, 00:23:43.996 "enable_zerocopy_send_server": true, 00:23:43.996 "enable_zerocopy_send_client": false, 00:23:43.996 "zerocopy_threshold": 0, 00:23:43.996 "tls_version": 0, 00:23:43.996 "enable_ktls": false 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "sock_impl_set_options", 00:23:43.996 "params": { 00:23:43.996 "impl_name": "posix", 00:23:43.996 "recv_buf_size": 2097152, 00:23:43.996 "send_buf_size": 2097152, 00:23:43.996 "enable_recv_pipe": true, 00:23:43.996 "enable_quickack": false, 00:23:43.996 "enable_placement_id": 0, 00:23:43.996 "enable_zerocopy_send_server": true, 00:23:43.996 "enable_zerocopy_send_client": false, 00:23:43.996 "zerocopy_threshold": 0, 00:23:43.996 "tls_version": 0, 00:23:43.996 "enable_ktls": false 00:23:43.996 } 00:23:43.996 } 00:23:43.996 ] 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "subsystem": "vmd", 00:23:43.996 "config": [] 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "subsystem": "accel", 00:23:43.996 "config": [ 00:23:43.996 { 00:23:43.996 "method": "accel_set_options", 00:23:43.996 "params": { 00:23:43.996 "small_cache_size": 128, 00:23:43.996 "large_cache_size": 16, 00:23:43.996 "task_count": 2048, 00:23:43.996 "sequence_count": 2048, 00:23:43.996 "buf_count": 2048 00:23:43.996 } 00:23:43.996 } 00:23:43.996 ] 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "subsystem": "bdev", 00:23:43.996 "config": [ 
00:23:43.996 { 00:23:43.996 "method": "bdev_set_options", 00:23:43.996 "params": { 00:23:43.996 "bdev_io_pool_size": 65535, 00:23:43.996 "bdev_io_cache_size": 256, 00:23:43.996 "bdev_auto_examine": true, 00:23:43.996 "iobuf_small_cache_size": 128, 00:23:43.996 "iobuf_large_cache_size": 16 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "bdev_raid_set_options", 00:23:43.996 "params": { 00:23:43.996 "process_window_size_kb": 1024 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "bdev_iscsi_set_options", 00:23:43.996 "params": { 00:23:43.996 "timeout_sec": 30 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "bdev_nvme_set_options", 00:23:43.996 "params": { 00:23:43.996 "action_on_timeout": "none", 00:23:43.996 "timeout_us": 0, 00:23:43.996 "timeout_admin_us": 0, 00:23:43.996 "keep_alive_timeout_ms": 10000, 00:23:43.996 "arbitration_burst": 0, 00:23:43.996 "low_priority_weight": 0, 00:23:43.996 "medium_priority_weight": 0, 00:23:43.996 "high_priority_weight": 0, 00:23:43.996 "nvme_adminq_poll_period_us": 10000, 00:23:43.996 "nvme_ioq_poll_period_us": 0, 00:23:43.996 "io_queue_requests": 512, 00:23:43.996 "delay_cmd_submit": true, 00:23:43.996 "transport_retry_count": 4, 00:23:43.996 "bdev_retry_count": 3, 00:23:43.996 "transport_ack_timeout": 0, 00:23:43.996 "ctrlr_loss_timeout_sec": 0, 00:23:43.996 "reconnect_delay_sec": 0, 00:23:43.996 "fast_io_fail_timeout_sec": 0, 00:23:43.996 "disable_auto_failback": false, 00:23:43.996 "generate_uuids": false, 00:23:43.996 "transport_tos": 0, 00:23:43.996 "nvme_error_stat": false, 00:23:43.996 "rdma_srq_size": 0, 00:23:43.996 "io_path_stat": false, 00:23:43.996 "allow_accel_sequence": false, 00:23:43.996 "rdma_max_cq_size": 0, 00:23:43.996 "rdma_cm_event_timeout_ms": 0, 00:23:43.996 "dhchap_digests": [ 00:23:43.996 "sha256", 00:23:43.996 "sha384", 00:23:43.996 "sha512" 00:23:43.996 ], 00:23:43.996 "dhchap_dhgroups": [ 00:23:43.996 "null", 00:23:43.996 "ffdhe2048", 00:23:43.996 "ffdhe3072", 00:23:43.996 "ffdhe4096", 00:23:43.996 "ffdhe6144", 00:23:43.996 "ffdhe8192" 00:23:43.996 ] 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "bdev_nvme_attach_controller", 00:23:43.996 "params": { 00:23:43.996 "name": "nvme0", 00:23:43.996 "trtype": "TCP", 00:23:43.996 "adrfam": "IPv4", 00:23:43.996 "traddr": "10.0.0.2", 00:23:43.996 "trsvcid": "4420", 00:23:43.996 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:43.996 "prchk_reftag": false, 00:23:43.996 "prchk_guard": false, 00:23:43.996 "ctrlr_loss_timeout_sec": 0, 00:23:43.996 "reconnect_delay_sec": 0, 00:23:43.996 "fast_io_fail_timeout_sec": 0, 00:23:43.996 "psk": "key0", 00:23:43.996 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:43.996 "hdgst": false, 00:23:43.996 "ddgst": false 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "bdev_nvme_set_hotplug", 00:23:43.996 "params": { 00:23:43.996 "period_us": 100000, 00:23:43.996 "enable": false 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "bdev_enable_histogram", 00:23:43.996 "params": { 00:23:43.996 "name": "nvme0n1", 00:23:43.996 "enable": true 00:23:43.996 } 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "method": "bdev_wait_for_examine" 00:23:43.996 } 00:23:43.996 ] 00:23:43.996 }, 00:23:43.996 { 00:23:43.996 "subsystem": "nbd", 00:23:43.996 "config": [] 00:23:43.996 } 00:23:43.996 ] 00:23:43.996 }' 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@266 -- # killprocess 949732 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- 
common/autotest_common.sh@948 -- # '[' -z 949732 ']' 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 949732 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949732 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949732' 00:23:43.996 killing process with pid 949732 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 949732 00:23:43.996 Received shutdown signal, test time was about 1.000000 seconds 00:23:43.996 00:23:43.996 Latency(us) 00:23:43.996 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:43.996 =================================================================================================================== 00:23:43.996 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:43.996 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 949732 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- target/tls.sh@267 -- # killprocess 949707 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 949707 ']' 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 949707 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 949707 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 949707' 00:23:44.254 killing process with pid 949707 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 949707 00:23:44.254 21:30:18 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 949707 00:23:44.512 21:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # nvmfappstart -c /dev/fd/62 00:23:44.512 21:30:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:44.512 21:30:19 nvmf_tcp.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:23:44.512 "subsystems": [ 00:23:44.512 { 00:23:44.512 "subsystem": "keyring", 00:23:44.512 "config": [ 00:23:44.512 { 00:23:44.512 "method": "keyring_file_add_key", 00:23:44.512 "params": { 00:23:44.512 "name": "key0", 00:23:44.513 "path": "/tmp/tmp.8mk28b471L" 00:23:44.513 } 00:23:44.513 } 00:23:44.513 ] 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "subsystem": "iobuf", 00:23:44.513 "config": [ 00:23:44.513 { 00:23:44.513 "method": "iobuf_set_options", 00:23:44.513 "params": { 00:23:44.513 "small_pool_count": 8192, 00:23:44.513 "large_pool_count": 1024, 00:23:44.513 "small_bufsize": 8192, 00:23:44.513 "large_bufsize": 135168 00:23:44.513 } 00:23:44.513 } 00:23:44.513 ] 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "subsystem": "sock", 00:23:44.513 "config": [ 00:23:44.513 { 00:23:44.513 "method": 
"sock_set_default_impl", 00:23:44.513 "params": { 00:23:44.513 "impl_name": "posix" 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "sock_impl_set_options", 00:23:44.513 "params": { 00:23:44.513 "impl_name": "ssl", 00:23:44.513 "recv_buf_size": 4096, 00:23:44.513 "send_buf_size": 4096, 00:23:44.513 "enable_recv_pipe": true, 00:23:44.513 "enable_quickack": false, 00:23:44.513 "enable_placement_id": 0, 00:23:44.513 "enable_zerocopy_send_server": true, 00:23:44.513 "enable_zerocopy_send_client": false, 00:23:44.513 "zerocopy_threshold": 0, 00:23:44.513 "tls_version": 0, 00:23:44.513 "enable_ktls": false 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "sock_impl_set_options", 00:23:44.513 "params": { 00:23:44.513 "impl_name": "posix", 00:23:44.513 "recv_buf_size": 2097152, 00:23:44.513 "send_buf_size": 2097152, 00:23:44.513 "enable_recv_pipe": true, 00:23:44.513 "enable_quickack": false, 00:23:44.513 "enable_placement_id": 0, 00:23:44.513 "enable_zerocopy_send_server": true, 00:23:44.513 "enable_zerocopy_send_client": false, 00:23:44.513 "zerocopy_threshold": 0, 00:23:44.513 "tls_version": 0, 00:23:44.513 "enable_ktls": false 00:23:44.513 } 00:23:44.513 } 00:23:44.513 ] 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "subsystem": "vmd", 00:23:44.513 "config": [] 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "subsystem": "accel", 00:23:44.513 "config": [ 00:23:44.513 { 00:23:44.513 "method": "accel_set_options", 00:23:44.513 "params": { 00:23:44.513 "small_cache_size": 128, 00:23:44.513 "large_cache_size": 16, 00:23:44.513 "task_count": 2048, 00:23:44.513 "sequence_count": 2048, 00:23:44.513 "buf_count": 2048 00:23:44.513 } 00:23:44.513 } 00:23:44.513 ] 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "subsystem": "bdev", 00:23:44.513 "config": [ 00:23:44.513 { 00:23:44.513 "method": "bdev_set_options", 00:23:44.513 "params": { 00:23:44.513 "bdev_io_pool_size": 65535, 00:23:44.513 "bdev_io_cache_size": 256, 00:23:44.513 "bdev_auto_examine": true, 00:23:44.513 "iobuf_small_cache_size": 128, 00:23:44.513 "iobuf_large_cache_size": 16 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "bdev_raid_set_options", 00:23:44.513 "params": { 00:23:44.513 "process_window_size_kb": 1024 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "bdev_iscsi_set_options", 00:23:44.513 "params": { 00:23:44.513 "timeout_sec": 30 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "bdev_nvme_set_options", 00:23:44.513 "params": { 00:23:44.513 "action_on_timeout": "none", 00:23:44.513 "timeout_us": 0, 00:23:44.513 "timeout_admin_us": 0, 00:23:44.513 "keep_alive_timeout_ms": 10000, 00:23:44.513 "arbitration_burst": 0, 00:23:44.513 "low_priority_weight": 0, 00:23:44.513 "medium_priority_weight": 0, 00:23:44.513 "high_priority_weight": 0, 00:23:44.513 "nvme_adminq_poll_period_us": 10000, 00:23:44.513 "nvme_ioq_poll_period_us": 0, 00:23:44.513 "io_queue_requests": 0, 00:23:44.513 "delay_cmd_submit": true, 00:23:44.513 "transport_retry_count": 4, 00:23:44.513 "bdev_retry_count": 3, 00:23:44.513 "transport_ack_timeout": 0, 00:23:44.513 "ctrlr_loss_timeout_sec": 0, 00:23:44.513 "reconnect_delay_sec": 0, 00:23:44.513 "fast_io_fail_timeout_sec": 0, 00:23:44.513 "disable_auto_failback": false, 00:23:44.513 "generate_uuids": false, 00:23:44.513 "transport_tos": 0, 00:23:44.513 "nvme_error_stat": false, 00:23:44.513 "rdma_srq_size": 0, 00:23:44.513 "io_path_stat": false, 00:23:44.513 "allow_accel_sequence": false, 00:23:44.513 "rdma_max_cq_size": 0, 
00:23:44.513 "rdma_cm_event_timeout_ms": 0, 00:23:44.513 "dhchap_digests": [ 00:23:44.513 "sha256", 00:23:44.513 "sha384", 00:23:44.513 "sha512" 00:23:44.513 ], 00:23:44.513 "dhchap_dhgroups": [ 00:23:44.513 "null", 00:23:44.513 "ffdhe2048", 00:23:44.513 "ffdhe3072", 00:23:44.513 "ffdhe4096", 00:23:44.513 "ffdhe6144", 00:23:44.513 "ffdhe8192" 00:23:44.513 ] 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "bdev_nvme_set_hotplug", 00:23:44.513 "params": { 00:23:44.513 "period_us": 100000, 00:23:44.513 "enable": false 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "bdev_malloc_create", 00:23:44.513 "params": { 00:23:44.513 "name": "malloc0", 00:23:44.513 "num_blocks": 8192, 00:23:44.513 "block_size": 4096, 00:23:44.513 "physical_block_size": 4096, 00:23:44.513 "uuid": "e6792d78-94b7-485c-a7ca-15eb58a8712a", 00:23:44.513 "optimal_io_boundary": 0 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "bdev_wait_for_examine" 00:23:44.513 } 00:23:44.513 ] 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "subsystem": "nbd", 00:23:44.513 "config": [] 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "subsystem": "scheduler", 00:23:44.513 "config": [ 00:23:44.513 { 00:23:44.513 "method": "framework_set_scheduler", 00:23:44.513 "params": { 00:23:44.513 "name": "static" 00:23:44.513 } 00:23:44.513 } 00:23:44.513 ] 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "subsystem": "nvmf", 00:23:44.513 "config": [ 00:23:44.513 { 00:23:44.513 "method": "nvmf_set_config", 00:23:44.513 "params": { 00:23:44.513 "discovery_filter": "match_any", 00:23:44.513 "admin_cmd_passthru": { 00:23:44.513 "identify_ctrlr": false 00:23:44.513 } 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "nvmf_set_max_subsystems", 00:23:44.513 "params": { 00:23:44.513 "max_subsystems": 1024 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "nvmf_set_crdt", 00:23:44.513 "params": { 00:23:44.513 "crdt1": 0, 00:23:44.513 "crdt2": 0, 00:23:44.513 "crdt3": 0 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "nvmf_create_transport", 00:23:44.513 "params": { 00:23:44.513 "trtype": "TCP", 00:23:44.513 "max_queue_depth": 128, 00:23:44.513 "max_io_qpairs_per_ctrlr": 127, 00:23:44.513 "in_capsule_data_size": 4096, 00:23:44.513 "max_io_size": 131072, 00:23:44.513 "io_unit_size": 131072, 00:23:44.513 "max_aq_depth": 128, 00:23:44.513 "num_shared_buffers": 511, 00:23:44.513 "buf_cache_size": 4294967295, 00:23:44.513 "dif_insert_or_strip": false, 00:23:44.513 "zcopy": false, 00:23:44.513 "c2h_success": false, 00:23:44.513 "sock_priority": 0, 00:23:44.513 "abort_timeout_sec": 1, 00:23:44.513 "ack_timeout": 0, 00:23:44.513 "data_wr_pool_size": 0 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "nvmf_create_subsystem", 00:23:44.513 "params": { 00:23:44.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.513 "allow_any_host": false, 00:23:44.513 "serial_number": "00000000000000000000", 00:23:44.513 "model_number": "SPDK bdev Controller", 00:23:44.513 "max_namespaces": 32, 00:23:44.513 "min_cntlid": 1, 00:23:44.513 "max_cntlid": 65519, 00:23:44.513 "ana_reporting": false 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "nvmf_subsystem_add_host", 00:23:44.513 "params": { 00:23:44.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.513 "host": "nqn.2016-06.io.spdk:host1", 00:23:44.513 "psk": "key0" 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "nvmf_subsystem_add_ns", 00:23:44.513 "params": { 
00:23:44.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.513 "namespace": { 00:23:44.513 "nsid": 1, 00:23:44.513 "bdev_name": "malloc0", 00:23:44.513 "nguid": "E6792D7894B7485CA7CA15EB58A8712A", 00:23:44.513 "uuid": "e6792d78-94b7-485c-a7ca-15eb58a8712a", 00:23:44.513 "no_auto_visible": false 00:23:44.513 } 00:23:44.513 } 00:23:44.513 }, 00:23:44.513 { 00:23:44.513 "method": "nvmf_subsystem_add_listener", 00:23:44.513 "params": { 00:23:44.513 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:44.513 "listen_address": { 00:23:44.513 "trtype": "TCP", 00:23:44.513 "adrfam": "IPv4", 00:23:44.513 "traddr": "10.0.0.2", 00:23:44.513 "trsvcid": "4420" 00:23:44.513 }, 00:23:44.513 "secure_channel": true 00:23:44.513 } 00:23:44.513 } 00:23:44.513 ] 00:23:44.513 } 00:23:44.513 ] 00:23:44.513 }' 00:23:44.513 21:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:44.513 21:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.513 21:30:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=950137 00:23:44.513 21:30:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:44.514 21:30:19 nvmf_tcp.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 950137 00:23:44.514 21:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 950137 ']' 00:23:44.514 21:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:44.514 21:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:44.514 21:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:44.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:44.514 21:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:44.514 21:30:19 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:44.514 [2024-07-11 21:30:19.172825] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:44.514 [2024-07-11 21:30:19.172907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:44.514 EAL: No free 2048 kB hugepages reported on node 1 00:23:44.514 [2024-07-11 21:30:19.240623] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:44.772 [2024-07-11 21:30:19.331436] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:44.772 [2024-07-11 21:30:19.331496] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:44.772 [2024-07-11 21:30:19.331513] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:44.772 [2024-07-11 21:30:19.331527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:44.772 [2024-07-11 21:30:19.331538] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:44.772 [2024-07-11 21:30:19.331633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:45.030 [2024-07-11 21:30:19.576467] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:45.030 [2024-07-11 21:30:19.608478] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:45.030 [2024-07-11 21:30:19.617964] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@272 -- # bdevperf_pid=950296 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@273 -- # waitforlisten 950296 /var/tmp/bdevperf.sock 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 950296 ']' 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:45.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
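The target config above authorizes host1 with "psk": "key0", a key name that resolves through a keyring entry registered the same way the bdevperf config below registers one (keyring_file_add_key with a name and a file path). The same state can be built at runtime over the RPC socket; a hedged sketch, where the flag spellings follow the JSON parameter names and may differ between SPDK releases:

    # Sketch: runtime equivalent of the keyring + host-authorization config.
    # The key file path reuses the tmp key visible in the bdevperf config.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC keyring_file_add_key key0 /tmp/tmp.8mk28b471L
    $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 \
         nqn.2016-06.io.spdk:host1 --psk key0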
00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:45.596 21:30:20 nvmf_tcp.nvmf_tls -- target/tls.sh@270 -- # echo '{ 00:23:45.596 "subsystems": [ 00:23:45.596 { 00:23:45.596 "subsystem": "keyring", 00:23:45.596 "config": [ 00:23:45.596 { 00:23:45.596 "method": "keyring_file_add_key", 00:23:45.596 "params": { 00:23:45.596 "name": "key0", 00:23:45.596 "path": "/tmp/tmp.8mk28b471L" 00:23:45.596 } 00:23:45.596 } 00:23:45.596 ] 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "subsystem": "iobuf", 00:23:45.596 "config": [ 00:23:45.596 { 00:23:45.596 "method": "iobuf_set_options", 00:23:45.596 "params": { 00:23:45.596 "small_pool_count": 8192, 00:23:45.596 "large_pool_count": 1024, 00:23:45.596 "small_bufsize": 8192, 00:23:45.596 "large_bufsize": 135168 00:23:45.596 } 00:23:45.596 } 00:23:45.596 ] 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "subsystem": "sock", 00:23:45.596 "config": [ 00:23:45.596 { 00:23:45.596 "method": "sock_set_default_impl", 00:23:45.596 "params": { 00:23:45.596 "impl_name": "posix" 00:23:45.596 } 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "method": "sock_impl_set_options", 00:23:45.596 "params": { 00:23:45.596 "impl_name": "ssl", 00:23:45.596 "recv_buf_size": 4096, 00:23:45.596 "send_buf_size": 4096, 00:23:45.596 "enable_recv_pipe": true, 00:23:45.596 "enable_quickack": false, 00:23:45.596 "enable_placement_id": 0, 00:23:45.596 "enable_zerocopy_send_server": true, 00:23:45.596 "enable_zerocopy_send_client": false, 00:23:45.596 "zerocopy_threshold": 0, 00:23:45.596 "tls_version": 0, 00:23:45.596 "enable_ktls": false 00:23:45.596 } 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "method": "sock_impl_set_options", 00:23:45.596 "params": { 00:23:45.596 "impl_name": "posix", 00:23:45.596 "recv_buf_size": 2097152, 00:23:45.596 "send_buf_size": 2097152, 00:23:45.596 "enable_recv_pipe": true, 00:23:45.596 "enable_quickack": false, 00:23:45.596 "enable_placement_id": 0, 00:23:45.596 "enable_zerocopy_send_server": true, 00:23:45.596 "enable_zerocopy_send_client": false, 00:23:45.596 "zerocopy_threshold": 0, 00:23:45.596 "tls_version": 0, 00:23:45.596 "enable_ktls": false 00:23:45.596 } 00:23:45.596 } 00:23:45.596 ] 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "subsystem": "vmd", 00:23:45.596 "config": [] 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "subsystem": "accel", 00:23:45.596 "config": [ 00:23:45.596 { 00:23:45.596 "method": "accel_set_options", 00:23:45.596 "params": { 00:23:45.596 "small_cache_size": 128, 00:23:45.596 "large_cache_size": 16, 00:23:45.596 "task_count": 2048, 00:23:45.596 "sequence_count": 2048, 00:23:45.596 "buf_count": 2048 00:23:45.596 } 00:23:45.596 } 00:23:45.596 ] 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "subsystem": "bdev", 00:23:45.596 "config": [ 00:23:45.596 { 00:23:45.596 "method": "bdev_set_options", 00:23:45.596 "params": { 00:23:45.596 "bdev_io_pool_size": 65535, 00:23:45.596 "bdev_io_cache_size": 256, 00:23:45.596 "bdev_auto_examine": true, 00:23:45.596 "iobuf_small_cache_size": 128, 00:23:45.596 "iobuf_large_cache_size": 16 00:23:45.596 } 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "method": "bdev_raid_set_options", 00:23:45.596 "params": { 00:23:45.596 "process_window_size_kb": 1024 00:23:45.596 } 
00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "method": "bdev_iscsi_set_options", 00:23:45.596 "params": { 00:23:45.596 "timeout_sec": 30 00:23:45.596 } 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "method": "bdev_nvme_set_options", 00:23:45.596 "params": { 00:23:45.596 "action_on_timeout": "none", 00:23:45.596 "timeout_us": 0, 00:23:45.596 "timeout_admin_us": 0, 00:23:45.596 "keep_alive_timeout_ms": 10000, 00:23:45.596 "arbitration_burst": 0, 00:23:45.596 "low_priority_weight": 0, 00:23:45.596 "medium_priority_weight": 0, 00:23:45.596 "high_priority_weight": 0, 00:23:45.596 "nvme_adminq_poll_period_us": 10000, 00:23:45.596 "nvme_ioq_poll_period_us": 0, 00:23:45.596 "io_queue_requests": 512, 00:23:45.596 "delay_cmd_submit": true, 00:23:45.596 "transport_retry_count": 4, 00:23:45.596 "bdev_retry_count": 3, 00:23:45.596 "transport_ack_timeout": 0, 00:23:45.596 "ctrlr_loss_timeout_sec": 0, 00:23:45.596 "reconnect_delay_sec": 0, 00:23:45.596 "fast_io_fail_timeout_sec": 0, 00:23:45.596 "disable_auto_failback": false, 00:23:45.596 "generate_uuids": false, 00:23:45.596 "transport_tos": 0, 00:23:45.596 "nvme_error_stat": false, 00:23:45.596 "rdma_srq_size": 0, 00:23:45.596 "io_path_stat": false, 00:23:45.596 "allow_accel_sequence": false, 00:23:45.596 "rdma_max_cq_size": 0, 00:23:45.596 "rdma_cm_event_timeout_ms": 0, 00:23:45.596 "dhchap_digests": [ 00:23:45.596 "sha256", 00:23:45.596 "sha384", 00:23:45.596 "sha512" 00:23:45.596 ], 00:23:45.596 "dhchap_dhgroups": [ 00:23:45.596 "null", 00:23:45.596 "ffdhe2048", 00:23:45.596 "ffdhe3072", 00:23:45.596 "ffdhe4096", 00:23:45.596 "ffdhe6144", 00:23:45.596 "ffdhe8192" 00:23:45.596 ] 00:23:45.596 } 00:23:45.596 }, 00:23:45.596 { 00:23:45.596 "method": "bdev_nvme_attach_controller", 00:23:45.596 "params": { 00:23:45.596 "name": "nvme0", 00:23:45.596 "trtype": "TCP", 00:23:45.596 "adrfam": "IPv4", 00:23:45.596 "traddr": "10.0.0.2", 00:23:45.596 "trsvcid": "4420", 00:23:45.596 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:45.596 "prchk_reftag": false, 00:23:45.596 "prchk_guard": false, 00:23:45.596 "ctrlr_loss_timeout_sec": 0, 00:23:45.596 "reconnect_delay_sec": 0, 00:23:45.596 "fast_io_fail_timeout_sec": 0, 00:23:45.596 "psk": "key0", 00:23:45.596 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:45.596 "hdgst": false, 00:23:45.596 "ddgst": false 00:23:45.596 } 00:23:45.596 }, 00:23:45.596 { 00:23:45.597 "method": "bdev_nvme_set_hotplug", 00:23:45.597 "params": { 00:23:45.597 "period_us": 100000, 00:23:45.597 "enable": false 00:23:45.597 } 00:23:45.597 }, 00:23:45.597 { 00:23:45.597 "method": "bdev_enable_histogram", 00:23:45.597 "params": { 00:23:45.597 "name": "nvme0n1", 00:23:45.597 "enable": true 00:23:45.597 } 00:23:45.597 }, 00:23:45.597 { 00:23:45.597 "method": "bdev_wait_for_examine" 00:23:45.597 } 00:23:45.597 ] 00:23:45.597 }, 00:23:45.597 { 00:23:45.597 "subsystem": "nbd", 00:23:45.597 "config": [] 00:23:45.597 } 00:23:45.597 ] 00:23:45.597 }' 00:23:45.597 [2024-07-11 21:30:20.214196] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
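bdevperf is launched with -z, so it idles on /var/tmp/bdevperf.sock until the bdev_nvme_attach_controller entry in the config above creates nvme0 over TLS; the harness then confirms the controller by name before driving I/O. The verification traced in the lines that follow amounts to:

    # Sketch: list controllers over bdevperf's private RPC socket; the test
    # expects the single name "nvme0". jq is assumed to be installed.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | jq -r '.[].name'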
00:23:45.597 [2024-07-11 21:30:20.214286] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid950296 ] 00:23:45.597 EAL: No free 2048 kB hugepages reported on node 1 00:23:45.597 [2024-07-11 21:30:20.276385] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.855 [2024-07-11 21:30:20.368727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:45.855 [2024-07-11 21:30:20.543217] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:46.420 21:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:46.420 21:30:21 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:23:46.420 21:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:46.420 21:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # jq -r '.[].name' 00:23:46.677 21:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@275 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.677 21:30:21 nvmf_tcp.nvmf_tls -- target/tls.sh@276 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:46.935 Running I/O for 1 seconds... 00:23:47.870 00:23:47.870 Latency(us) 00:23:47.870 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:47.870 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:47.870 Verification LBA range: start 0x0 length 0x2000 00:23:47.870 nvme0n1 : 1.02 3308.97 12.93 0.00 0.00 38237.56 6699.24 35535.08 00:23:47.870 =================================================================================================================== 00:23:47.870 Total : 3308.97 12.93 0.00 0.00 38237.56 6699.24 35535.08 00:23:47.870 0 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@278 -- # trap - SIGINT SIGTERM EXIT 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@279 -- # cleanup 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:23:47.870 nvmf_trace.0 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@16 -- # killprocess 950296 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 950296 ']' 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # 
kill -0 950296 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:47.870 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950296 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950296' 00:23:48.129 killing process with pid 950296 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 950296 00:23:48.129 Received shutdown signal, test time was about 1.000000 seconds 00:23:48.129 00:23:48.129 Latency(us) 00:23:48.129 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:48.129 =================================================================================================================== 00:23:48.129 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 950296 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.129 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.129 rmmod nvme_tcp 00:23:48.129 rmmod nvme_fabrics 00:23:48.390 rmmod nvme_keyring 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 950137 ']' 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 950137 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 950137 ']' 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 950137 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 950137 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 950137' 00:23:48.390 killing process with pid 950137 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@967 -- # kill 950137 00:23:48.390 21:30:22 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@972 -- # wait 950137 00:23:48.650 21:30:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.650 21:30:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.650 21:30:23 nvmf_tcp.nvmf_tls -- 
nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.650 21:30:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.650 21:30:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.650 21:30:23 nvmf_tcp.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.650 21:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:48.650 21:30:23 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.551 21:30:25 nvmf_tcp.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.551 21:30:25 nvmf_tcp.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.tpmWLOOrhX /tmp/tmp.RRWO9wf6ri /tmp/tmp.8mk28b471L 00:23:50.551 00:23:50.551 real 1m18.271s 00:23:50.551 user 2m7.435s 00:23:50.551 sys 0m25.207s 00:23:50.551 21:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:50.551 21:30:25 nvmf_tcp.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.551 ************************************ 00:23:50.551 END TEST nvmf_tls 00:23:50.551 ************************************ 00:23:50.551 21:30:25 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:23:50.551 21:30:25 nvmf_tcp -- nvmf/nvmf.sh@62 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:50.551 21:30:25 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:50.551 21:30:25 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:50.551 21:30:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:23:50.551 ************************************ 00:23:50.551 START TEST nvmf_fips 00:23:50.551 ************************************ 00:23:50.551 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:23:50.809 * Looking for test storage... 
00:23:50.809 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:23:50.809 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:50.809 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:23:50.809 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:50.809 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:50.809 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:50.809 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:50.809 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.810 21:30:25 
nvmf_tcp.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 
v 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@95 -- # [[ ! -f /usr/lib64/ossl-modules/fips.so ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@37 -- # cat 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@127 -- # : 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:23:50.810 Error setting digest 00:23:50.810 00F29E7F5E7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:23:50.810 00F29E7F5E7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:23:50.810 21:30:25 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:52.741 
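The OpenSSL gate traced a few lines back splits 3.0.9 and 3.0.0 on dots and compares them field by field; the deliberate openssl md5 failure after it is the actual FIPS probe, since MD5 must be unavailable once the FIPS provider is active (hence the "Error setting digest" output). Outside the harness the version half reduces to a sort -V check; a minimal sketch:

    # Sketch: require OpenSSL >= 3.0.0, mirroring check_openssl_version.
    target=3.0.0
    have=$(openssl version | awk '{print $2}')
    # sort -V orders version strings per numeric field; if the target sorts
    # first (or ties), the installed version is new enough.
    if [ "$(printf '%s\n' "$target" "$have" | sort -V | head -n1)" = "$target" ]; then
        echo "OpenSSL $have >= $target"
    else
        echo "OpenSSL $have is older than $target" >&2
        exit 1
    fi
    # Companion FIPS probe: `openssl md5 /dev/null` is expected to fail here.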
21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:52.741 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:52.741 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:52.742 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:52.742 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:52.742 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:52.742 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.000 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.000 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:23:53.000 00:23:53.000 --- 10.0.0.2 ping statistics --- 00:23:53.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.000 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.000 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:53.000 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:23:53.000 00:23:53.000 --- 10.0.0.1 ping statistics --- 00:23:53.000 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.000 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=952541 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 952541 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 952541 ']' 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.000 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.001 [2024-07-11 21:30:27.662765] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:53.001 [2024-07-11 21:30:27.662881] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.001 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.001 [2024-07-11 21:30:27.728339] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.258 [2024-07-11 21:30:27.813122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.258 [2024-07-11 21:30:27.813178] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:53.258 [2024-07-11 21:30:27.813207] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.258 [2024-07-11 21:30:27.813219] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.258 [2024-07-11 21:30:27.813229] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.258 [2024-07-11 21:30:27.813260] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:53.259 21:30:27 nvmf_tcp.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:53.516 [2024-07-11 21:30:28.174974] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.516 [2024-07-11 21:30:28.190981] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:53.516 [2024-07-11 21:30:28.191244] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:53.516 [2024-07-11 21:30:28.223453] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:23:53.516 malloc0 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=952680 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 952680 /var/tmp/bdevperf.sock 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 952680 ']' 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:53.516 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.516 21:30:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:23:53.774 [2024-07-11 21:30:28.312843] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:23:53.774 [2024-07-11 21:30:28.312926] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid952680 ] 00:23:53.774 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.774 [2024-07-11 21:30:28.371113] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.774 [2024-07-11 21:30:28.455450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:54.031 21:30:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:54.031 21:30:28 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:23:54.031 21:30:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:23:54.031 [2024-07-11 21:30:28.788017] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:54.031 [2024-07-11 21:30:28.788179] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:23:54.288 TLSTESTn1 00:23:54.288 21:30:28 nvmf_tcp.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:54.288 Running I/O for 10 seconds... 
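The 10-second run kicked off above authenticates with the interchange-format PSK written and locked down a few lines earlier. Stripped of harness paths, the client-side pattern is as follows; the key value is the published test key from the trace (not a secret), and the rpc.py flags are copied from the traced attach command:

    # Sketch: install a TLS PSK and attach an NVMe/TCP controller through it.
    KEY='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
    echo -n "$KEY" > /tmp/key.txt
    chmod 0600 /tmp/key.txt   # the harness restricts the key file before use
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
        -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk /tmp/key.txt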
00:24:06.477 00:24:06.477 Latency(us) 00:24:06.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.477 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:06.477 Verification LBA range: start 0x0 length 0x2000 00:24:06.477 TLSTESTn1 : 10.03 3430.28 13.40 0.00 0.00 37244.78 7912.87 39418.69 00:24:06.477 =================================================================================================================== 00:24:06.477 Total : 3430.28 13.40 0.00 0.00 37244.78 7912.87 39418.69 00:24:06.477 0 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@807 -- # id=0 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:06.477 nvmf_trace.0 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@821 -- # return 0 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@16 -- # killprocess 952680 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 952680 ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 952680 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 952680 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 952680' 00:24:06.477 killing process with pid 952680 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 952680 00:24:06.477 Received shutdown signal, test time was about 10.000000 seconds 00:24:06.477 00:24:06.477 Latency(us) 00:24:06.477 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:06.477 =================================================================================================================== 00:24:06.477 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:06.477 [2024-07-11 21:30:39.144915] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 952680 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- 
nvmf/common.sh@488 -- # nvmfcleanup 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:06.477 rmmod nvme_tcp 00:24:06.477 rmmod nvme_fabrics 00:24:06.477 rmmod nvme_keyring 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 952541 ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 952541 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 952541 ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 952541 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 952541 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:06.477 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 952541' 00:24:06.478 killing process with pid 952541 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@967 -- # kill 952541 00:24:06.478 [2024-07-11 21:30:39.449966] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@972 -- # wait 952541 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:06.478 21:30:39 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.046 21:30:41 nvmf_tcp.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:07.046 21:30:41 nvmf_tcp.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:24:07.046 00:24:07.046 real 0m16.466s 00:24:07.046 user 0m21.469s 00:24:07.046 sys 0m5.240s 00:24:07.046 21:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:07.046 21:30:41 nvmf_tcp.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:07.046 ************************************ 00:24:07.046 END TEST nvmf_fips 00:24:07.046 
************************************ 00:24:07.046 21:30:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:07.046 21:30:41 nvmf_tcp -- nvmf/nvmf.sh@65 -- # '[' 1 -eq 1 ']' 00:24:07.046 21:30:41 nvmf_tcp -- nvmf/nvmf.sh@66 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:07.046 21:30:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:07.046 21:30:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:07.046 21:30:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:07.046 ************************************ 00:24:07.046 START TEST nvmf_fuzz 00:24:07.046 ************************************ 00:24:07.046 21:30:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:07.304 * Looking for test storage... 00:24:07.304 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:07.304 21:30:41 nvmf_tcp.nvmf_fuzz -- paths/export.sh@2 -- # 
PATH=[paths/export.sh@2-@6: repeated Go/golangci/protoc toolchain PATH dump elided; the same value is re-exported and echoed unchanged] 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@47 -- # : 0 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:07.305 21:30:41
nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@285 -- # xtrace_disable 00:24:07.305 21:30:41 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # e810=() 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # x722=() 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # mlx=() 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@327 
-- # [[ e810 == mlx5 ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:09.207 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:09.207 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:09.207 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.207 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@399 
-- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:09.208 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.208 21:30:43 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.466 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.466 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.134 ms 00:24:09.466 00:24:09.466 --- 10.0.0.2 ping statistics --- 00:24:09.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.466 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.466 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:09.466 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:24:09.466 00:24:09.466 --- 10.0.0.1 ping statistics --- 00:24:09.466 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.466 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@422 -- # return 0 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=955930 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 955930 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@829 -- # '[' -z 955930 ']' 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
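
The nvmftestinit sequence above builds the standard topology for these phy TCP suites: one port of the E810 pair (cvl_0_0) is moved into a private network namespace for the target, while its sibling (cvl_0_1) stays in the root namespace as the initiator. Stripped of the xtrace noise, it amounts to:

    ip netns add cvl_0_0_ns_spdk                   # target-side namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk      # move the target port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1            # initiator address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP traffic
    ping -c 1 10.0.0.2                             # verify both directions
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1

Once nvmf_tgt answers on /var/tmp/spdk.sock, the suite provisions a single malloc-backed subsystem for the fuzzer. rpc_cmd in the trace is the harness wrapper that forwards to scripts/rpc.py on the default socket, so the calls that follow condense to:

    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192      # -u caps in-capsule data at 8 KiB
    scripts/rpc.py bdev_malloc_create -b Malloc0 64 512         # 64 MiB RAM disk, 512 B blocks
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
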
00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.466 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@862 -- # return 0 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.724 Malloc0 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:09.724 21:30:44 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:24:41.783 Fuzzing completed. 
Shutting down the fuzz application 00:24:41.783 00:24:41.783 Dumping successful admin opcodes: 00:24:41.783 8, 9, 10, 24, 00:24:41.783 Dumping successful io opcodes: 00:24:41.783 0, 9, 00:24:41.783 NS: 0x200003aeff00 I/O qp, Total commands completed: 489723, total successful commands: 2822, random_seed: 2265137920 00:24:41.783 NS: 0x200003aeff00 admin qp, Total commands completed: 59504, total successful commands: 473, random_seed: 288725760 00:24:41.783 21:31:14 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:24:41.783 Fuzzing completed. Shutting down the fuzz application 00:24:41.783 00:24:41.783 Dumping successful admin opcodes: 00:24:41.783 24, 00:24:41.783 Dumping successful io opcodes: 00:24:41.783 00:24:41.783 NS: 0x200003aeff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 2072435565 00:24:41.783 NS: 0x200003aeff00 admin qp, Total commands completed: 16, total successful commands: 4, random_seed: 2072546640 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@117 -- # sync 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@120 -- # set +e 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.783 rmmod nvme_tcp 00:24:41.783 rmmod nvme_fabrics 00:24:41.783 rmmod nvme_keyring 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@124 -- # set -e 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@125 -- # return 0 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@489 -- # '[' -n 955930 ']' 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@490 -- # killprocess 955930 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@948 -- # '[' -z 955930 ']' 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@952 -- # kill -0 955930 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # uname 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 955930 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:41.783 
21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 955930' 00:24:41.783 killing process with pid 955930 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@967 -- # kill 955930 00:24:41.783 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@972 -- # wait 955930 00:24:42.041 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:42.041 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:42.041 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:42.041 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:42.041 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:42.041 21:31:16 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:42.041 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:42.041 21:31:16 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.940 21:31:18 nvmf_tcp.nvmf_fuzz -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.940 21:31:18 nvmf_tcp.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:24:44.198 00:24:44.198 real 0m36.931s 00:24:44.198 user 0m50.694s 00:24:44.198 sys 0m15.589s 00:24:44.198 21:31:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:44.198 21:31:18 nvmf_tcp.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:44.198 ************************************ 00:24:44.198 END TEST nvmf_fuzz 00:24:44.198 ************************************ 00:24:44.198 21:31:18 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:24:44.198 21:31:18 nvmf_tcp -- nvmf/nvmf.sh@67 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:44.198 21:31:18 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:44.198 21:31:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:44.198 21:31:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:24:44.198 ************************************ 00:24:44.198 START TEST nvmf_multiconnection 00:24:44.198 ************************************ 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:24:44.198 * Looking for test storage... 
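
The two nvme_fuzz passes that closed the fuzz suite above differ in strategy: the first ran 30 seconds of seeded random commands (-t 30 -S 123456) and exercised both queues heavily (489,723 I/O and 59,504 admin commands completed), while the second replayed the curated command set from example.json (-j), completing only a handful of deliberately malformed requests. The teardown that followed is the epilogue every suite here shares; condensed, and assuming _remove_spdk_ns deletes the namespace created by nvmftestinit, it is:

    modprobe -v -r nvme-tcp              # unloads nvme_tcp, nvme_fabrics, nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid" && wait "$nvmfpid"   # stop the nvmf_tgt started at setup
    ip netns delete cvl_0_0_ns_spdk      # assumed body of _remove_spdk_ns
    ip -4 addr flush cvl_0_1             # clear the initiator-side address
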
00:24:44.198 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # uname -s 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.198 21:31:18 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=[paths/export.sh@3-@6: repeated PATH re-exports and echo elided; identical to the paths/export.sh@2 value above] 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@47 -- # : 0 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@410 -- # local -g
is_hw=no 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@285 -- # xtrace_disable 00:24:44.199 21:31:18 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # pci_devs=() 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # net_devs=() 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # e810=() 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@296 -- # local -ga e810 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # x722=() 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@297 -- # local -ga x722 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # mlx=() 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@298 -- # local -ga mlx 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:46.103 21:31:20 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:46.103 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:46.103 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:46.103 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:46.103 21:31:20 
nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:46.103 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@414 -- # is_hw=yes 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:46.103 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:46.104 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:46.104 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:46.104 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.111 ms 00:24:46.104 00:24:46.104 --- 10.0.0.2 ping statistics --- 00:24:46.104 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.104 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:46.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:46.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.075 ms 00:24:46.363 00:24:46.363 --- 10.0.0.1 ping statistics --- 00:24:46.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:46.363 rtt min/avg/max/mdev = 0.075/0.075/0.075/0.000 ms 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@422 -- # return 0 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@481 -- # nvmfpid=961549 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@482 -- # waitforlisten 961549 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@829 -- # '[' -z 961549 ']' 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:46.363 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
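
Unlike the fuzz target, which was pinned to a single core (-m 0x1), this suite starts the target with core mask 0xF so the eleven upcoming connections can spread across four reactors (the trace below confirms reactors starting on cores 0-3). Condensed, the start line inside the namespace is:

    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!   # -i 0: shared-memory id, -e 0xFFFF: enable all tracepoint groups
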
00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:46.363 21:31:20 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.363 [2024-07-11 21:31:20.949278] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:24:46.363 [2024-07-11 21:31:20.949368] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:46.363 EAL: No free 2048 kB hugepages reported on node 1 00:24:46.363 [2024-07-11 21:31:21.018277] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:46.363 [2024-07-11 21:31:21.112502] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:46.363 [2024-07-11 21:31:21.112565] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:46.363 [2024-07-11 21:31:21.112592] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:46.363 [2024-07-11 21:31:21.112606] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:46.363 [2024-07-11 21:31:21.112619] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:46.363 [2024-07-11 21:31:21.115779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:46.363 [2024-07-11 21:31:21.115832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:46.363 [2024-07-11 21:31:21.115861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:46.363 [2024-07-11 21:31:21.115865] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@862 -- # return 0 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.629 [2024-07-11 21:31:21.258445] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.629 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 
21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 Malloc1 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 [2024-07-11 21:31:21.313276] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 Malloc2 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.630 Malloc3 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.630 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 Malloc4 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
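The loop traced here applies the same four RPCs to each of the eleven subsystems; rpc_cmd is the harness wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock. With the transport already in place (the nvmf_create_transport -t tcp -o -u 8192 call traced earlier, flags exactly as shown), the whole loop reduces to the sketch below.

# Sketch of the configuration loop, calling rpc.py directly.
SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
for i in $(seq 1 11); do
    # 64 MiB RAM-backed bdev with a 512-byte block size
    "$SPDK/scripts/rpc.py" bdev_malloc_create 64 512 -b "Malloc$i"
    # Subsystem cnode$i: -a allows any host NQN, -s sets serial SPDK$i
    # (waitforserial later keys off that serial via lsblk)
    "$SPDK/scripts/rpc.py" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK$i"
    # Attach the bdev as a namespace of the subsystem
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Malloc$i"
    # Accept NVMe/TCP hosts on the namespaced address
    "$SPDK/scripts/rpc.py" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

The namespaces are deliberately tiny (64 is megabytes here): the test exercises connection fan-out across eleven subsystems, not capacity.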
00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 Malloc5 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 Malloc6 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 Malloc7 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 Malloc8 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.920 Malloc9 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.920 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:24:46.921 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.921 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:46.921 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:46.921 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:24:46.921 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:46.921 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 
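Nothing in the trace inspects target state between iterations, but if you are replaying this by hand, two standard query RPCs (not part of this script) confirm the loop did what it should:

"$SPDK/scripts/rpc.py" bdev_get_bdevs        # should list Malloc1..Malloc11
"$SPDK/scripts/rpc.py" nvmf_get_subsystems   # cnode1..cnode11, each with a TCP listener on 10.0.0.2:4420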
00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 Malloc10 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 Malloc11 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420 
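With all eleven listeners up, the script switches to the host side: each nvme connect that follows creates one kernel controller plus a /dev/nvmeXn1 block device. Collapsed into a loop, with the hostnqn/hostid copied from the trace:

HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
for i in $(seq 1 11); do
    # NVMe/TCP attach to subsystem cnode$i on the target's in-namespace address
    nvme connect --hostnqn="nqn.2014-08.org.nvmexpress:uuid:$HOSTID" \
                 --hostid="$HOSTID" \
                 -t tcp -n "nqn.2016-06.io.spdk:cnode$i" -a 10.0.0.2 -s 4420
done

Note that -s 4420 on the initiator side is the transport service id (the TCP port), not a serial number; the SPDK$i serials only become visible on the host through lsblk once each namespace attaches.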
00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:47.180 21:31:21 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:24:47.746 21:31:22 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1 00:24:47.746 21:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:47.746 21:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:47.746 21:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:47.746 21:31:22 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:49.642 21:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:49.642 21:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:49.642 21:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK1 00:24:49.642 21:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:49.642 21:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:49.642 21:31:24 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:49.642 21:31:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:49.642 21:31:24 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420 00:24:50.574 21:31:25 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2 00:24:50.574 21:31:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:50.574 21:31:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:50.574 21:31:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:50.574 21:31:25 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:52.471 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:52.471 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:52.471 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK2 00:24:52.471 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:52.471 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:52.471 
21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:52.471 21:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:52.471 21:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420 00:24:53.036 21:31:27 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3 00:24:53.036 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:53.036 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:53.036 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:53.036 21:31:27 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:54.932 21:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:54.932 21:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:54.932 21:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK3 00:24:55.190 21:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:55.190 21:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:55.190 21:31:29 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:55.190 21:31:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:55.190 21:31:29 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420 00:24:55.759 21:31:30 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4 00:24:55.759 21:31:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:55.759 21:31:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:55.759 21:31:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:55.759 21:31:30 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:24:58.281 21:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:24:58.281 21:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:24:58.281 21:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK4 00:24:58.281 21:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:24:58.281 21:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:24:58.281 21:31:32 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:24:58.281 21:31:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:24:58.281 21:31:32 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420 00:24:58.845 21:31:33 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5 00:24:58.845 21:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:24:58.845 21:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:24:58.845 21:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:24:58.845 21:31:33 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:00.738 21:31:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:00.738 21:31:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:00.738 21:31:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK5 00:25:00.738 21:31:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:00.738 21:31:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:00.738 21:31:35 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:00.738 21:31:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:00.738 21:31:35 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420 00:25:01.669 21:31:36 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6 00:25:01.669 21:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:01.669 21:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:01.669 21:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:01.669 21:31:36 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:03.584 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:03.584 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:03.584 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK6 00:25:03.584 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:03.584 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:03.584 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:03.584 21:31:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:03.584 21:31:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420 00:25:04.148 21:31:38 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7 00:25:04.148 21:31:38 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:04.148 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:04.148 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:04.148 21:31:38 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:06.127 21:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:06.127 21:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:06.127 21:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK7 00:25:06.384 21:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:06.384 21:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:06.384 21:31:40 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:06.384 21:31:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:06.384 21:31:40 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420 00:25:06.951 21:31:41 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8 00:25:06.951 21:31:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:06.951 21:31:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:06.951 21:31:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:06.951 21:31:41 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:08.857 21:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:08.857 21:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:08.857 21:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK8 00:25:08.857 21:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:08.857 21:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:08.857 21:31:43 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:08.857 21:31:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:08.857 21:31:43 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420 00:25:09.837 21:31:44 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9 00:25:09.837 21:31:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:09.837 21:31:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:09.837 21:31:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 
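Each connect is gated by waitforserial, whose polling is what fills the trace around here: sleep, list block devices with their serials, count matches, retry. A re-implementation of the loop being traced (reconstructed from the trace itself, not copied from autotest_common.sh):

waitforserial() {
    local serial=$1 i=0 nvme_devices=0
    local nvme_device_counter=1          # expect exactly one device per subsystem
    while (( i++ <= 15 )); do            # ~15 tries x 2 s, the counter seen in the trace
        sleep 2
        nvme_devices=$(lsblk -l -o NAME,SERIAL | grep -c "$serial")
        (( nvme_devices == nvme_device_counter )) && return 0
    done
    return 1
}

waitforserial SPDK9    # e.g. right after connecting cnode9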
00:25:09.837 21:31:44 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:12.361 21:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:12.361 21:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:12.361 21:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK9 00:25:12.361 21:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:12.361 21:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:12.361 21:31:46 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:12.361 21:31:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:12.361 21:31:46 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420 00:25:12.618 21:31:47 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10 00:25:12.618 21:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:12.618 21:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:12.618 21:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:12.618 21:31:47 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:15.144 21:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:15.144 21:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:15.144 21:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK10 00:25:15.144 21:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:15.144 21:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:15.144 21:31:49 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:15.144 21:31:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:15.144 21:31:49 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420 00:25:15.710 21:31:50 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11 00:25:15.710 21:31:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1198 -- # local i=0 00:25:15.710 21:31:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:15.710 21:31:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:15.710 21:31:50 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1205 -- # sleep 2 00:25:17.609 21:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:17.609 21:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # lsblk -l -o 
NAME,SERIAL 00:25:17.609 21:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # grep -c SPDK11 00:25:17.609 21:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:17.609 21:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:17.609 21:31:52 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1208 -- # return 0 00:25:17.609 21:31:52 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10 00:25:17.870 [global] 00:25:17.870 thread=1 00:25:17.870 invalidate=1 00:25:17.870 rw=read 00:25:17.870 time_based=1 00:25:17.870 runtime=10 00:25:17.870 ioengine=libaio 00:25:17.870 direct=1 00:25:17.870 bs=262144 00:25:17.870 iodepth=64 00:25:17.870 norandommap=1 00:25:17.870 numjobs=1 00:25:17.870 00:25:17.870 [job0] 00:25:17.870 filename=/dev/nvme0n1 00:25:17.870 [job1] 00:25:17.870 filename=/dev/nvme10n1 00:25:17.870 [job2] 00:25:17.870 filename=/dev/nvme1n1 00:25:17.870 [job3] 00:25:17.870 filename=/dev/nvme2n1 00:25:17.870 [job4] 00:25:17.870 filename=/dev/nvme3n1 00:25:17.870 [job5] 00:25:17.870 filename=/dev/nvme4n1 00:25:17.870 [job6] 00:25:17.870 filename=/dev/nvme5n1 00:25:17.870 [job7] 00:25:17.870 filename=/dev/nvme6n1 00:25:17.870 [job8] 00:25:17.870 filename=/dev/nvme7n1 00:25:17.870 [job9] 00:25:17.870 filename=/dev/nvme8n1 00:25:17.870 [job10] 00:25:17.870 filename=/dev/nvme9n1 00:25:17.870 Could not set queue depth (nvme0n1) 00:25:17.870 Could not set queue depth (nvme10n1) 00:25:17.870 Could not set queue depth (nvme1n1) 00:25:17.870 Could not set queue depth (nvme2n1) 00:25:17.870 Could not set queue depth (nvme3n1) 00:25:17.870 Could not set queue depth (nvme4n1) 00:25:17.870 Could not set queue depth (nvme5n1) 00:25:17.870 Could not set queue depth (nvme6n1) 00:25:17.870 Could not set queue depth (nvme7n1) 00:25:17.870 Could not set queue depth (nvme8n1) 00:25:17.870 Could not set queue depth (nvme9n1) 00:25:18.128 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:25:18.128 fio-3.35 00:25:18.128 Starting 11 threads 00:25:30.337 00:25:30.337 job0: 
(groupid=0, jobs=1): err= 0: pid=965803: Thu Jul 11 21:32:03 2024 00:25:30.337 read: IOPS=799, BW=200MiB/s (210MB/s)(2032MiB/10165msec) 00:25:30.337 slat (usec): min=9, max=227807, avg=899.14, stdev=5384.80 00:25:30.337 clat (usec): min=1428, max=448645, avg=79076.15, stdev=69319.67 00:25:30.337 lat (usec): min=1460, max=448667, avg=79975.29, stdev=70270.63 00:25:30.337 clat percentiles (msec): 00:25:30.337 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 16], 20.00th=[ 26], 00:25:30.337 | 30.00th=[ 34], 40.00th=[ 43], 50.00th=[ 56], 60.00th=[ 74], 00:25:30.337 | 70.00th=[ 94], 80.00th=[ 130], 90.00th=[ 169], 95.00th=[ 234], 00:25:30.337 | 99.00th=[ 305], 99.50th=[ 342], 99.90th=[ 393], 99.95th=[ 405], 00:25:30.337 | 99.99th=[ 447] 00:25:30.337 bw ( KiB/s): min=61952, max=437248, per=11.80%, avg=206416.60, stdev=113466.76, samples=20 00:25:30.337 iops : min= 242, max= 1708, avg=806.30, stdev=443.23, samples=20 00:25:30.337 lat (msec) : 2=0.15%, 4=0.95%, 10=3.59%, 20=10.17%, 50=31.41% 00:25:30.337 lat (msec) : 100=25.22%, 250=25.43%, 500=3.08% 00:25:30.337 cpu : usr=0.30%, sys=2.01%, ctx=1526, majf=0, minf=4097 00:25:30.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:30.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.337 issued rwts: total=8128,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.337 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.337 job1: (groupid=0, jobs=1): err= 0: pid=965804: Thu Jul 11 21:32:03 2024 00:25:30.337 read: IOPS=597, BW=149MiB/s (157MB/s)(1495MiB/10014msec) 00:25:30.337 slat (usec): min=8, max=157535, avg=939.02, stdev=6616.60 00:25:30.337 clat (usec): min=835, max=371262, avg=106176.40, stdev=76496.10 00:25:30.337 lat (usec): min=859, max=383924, avg=107115.42, stdev=77405.38 00:25:30.337 clat percentiles (usec): 00:25:30.337 | 1.00th=[ 1631], 5.00th=[ 11207], 10.00th=[ 17957], 20.00th=[ 39060], 00:25:30.337 | 30.00th=[ 51119], 40.00th=[ 69731], 50.00th=[ 88605], 60.00th=[108528], 00:25:30.337 | 70.00th=[132645], 80.00th=[187696], 90.00th=[219153], 95.00th=[250610], 00:25:30.337 | 99.00th=[299893], 99.50th=[308282], 99.90th=[320865], 99.95th=[341836], 00:25:30.337 | 99.99th=[371196] 00:25:30.337 bw ( KiB/s): min=78848, max=318464, per=8.66%, avg=151464.75, stdev=55713.32, samples=20 00:25:30.337 iops : min= 308, max= 1244, avg=591.65, stdev=217.64, samples=20 00:25:30.337 lat (usec) : 1000=0.05% 00:25:30.337 lat (msec) : 2=1.04%, 4=0.13%, 10=2.64%, 20=7.59%, 50=18.08% 00:25:30.337 lat (msec) : 100=26.25%, 250=39.20%, 500=5.02% 00:25:30.337 cpu : usr=0.28%, sys=1.54%, ctx=1263, majf=0, minf=4097 00:25:30.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:25:30.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.337 issued rwts: total=5980,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.337 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.337 job2: (groupid=0, jobs=1): err= 0: pid=965805: Thu Jul 11 21:32:03 2024 00:25:30.337 read: IOPS=1165, BW=291MiB/s (306MB/s)(2919MiB/10016msec) 00:25:30.337 slat (usec): min=11, max=183719, avg=822.94, stdev=3259.90 00:25:30.337 clat (msec): min=2, max=312, avg=54.04, stdev=33.74 00:25:30.337 lat (msec): min=3, max=385, avg=54.87, stdev=34.15 00:25:30.337 clat percentiles (msec): 00:25:30.337 | 
1.00th=[ 21], 5.00th=[ 27], 10.00th=[ 29], 20.00th=[ 32], 00:25:30.337 | 30.00th=[ 34], 40.00th=[ 37], 50.00th=[ 44], 60.00th=[ 54], 00:25:30.337 | 70.00th=[ 61], 80.00th=[ 71], 90.00th=[ 90], 95.00th=[ 110], 00:25:30.337 | 99.00th=[ 213], 99.50th=[ 271], 99.90th=[ 313], 99.95th=[ 313], 00:25:30.337 | 99.99th=[ 313] 00:25:30.337 bw ( KiB/s): min=136704, max=470528, per=16.99%, avg=297238.85, stdev=108271.32, samples=20 00:25:30.337 iops : min= 534, max= 1838, avg=1161.05, stdev=422.94, samples=20 00:25:30.337 lat (msec) : 4=0.07%, 10=0.33%, 20=0.56%, 50=54.93%, 100=37.27% 00:25:30.337 lat (msec) : 250=6.30%, 500=0.56% 00:25:30.337 cpu : usr=0.61%, sys=3.53%, ctx=1871, majf=0, minf=4097 00:25:30.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:25:30.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.337 issued rwts: total=11675,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.337 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.337 job3: (groupid=0, jobs=1): err= 0: pid=965806: Thu Jul 11 21:32:03 2024 00:25:30.337 read: IOPS=478, BW=120MiB/s (126MB/s)(1217MiB/10170msec) 00:25:30.337 slat (usec): min=12, max=200761, avg=1910.21, stdev=9212.22 00:25:30.337 clat (usec): min=1769, max=350247, avg=131668.10, stdev=82167.23 00:25:30.337 lat (usec): min=1796, max=442696, avg=133578.31, stdev=83662.94 00:25:30.337 clat percentiles (msec): 00:25:30.337 | 1.00th=[ 7], 5.00th=[ 16], 10.00th=[ 27], 20.00th=[ 50], 00:25:30.337 | 30.00th=[ 72], 40.00th=[ 89], 50.00th=[ 114], 60.00th=[ 165], 00:25:30.337 | 70.00th=[ 188], 80.00th=[ 213], 90.00th=[ 239], 95.00th=[ 275], 00:25:30.337 | 99.00th=[ 305], 99.50th=[ 317], 99.90th=[ 326], 99.95th=[ 330], 00:25:30.337 | 99.99th=[ 351] 00:25:30.337 bw ( KiB/s): min=54784, max=227328, per=7.03%, avg=122986.25, stdev=58738.00, samples=20 00:25:30.337 iops : min= 214, max= 888, avg=480.40, stdev=229.42, samples=20 00:25:30.337 lat (msec) : 2=0.10%, 4=0.51%, 10=1.56%, 20=4.68%, 50=13.19% 00:25:30.337 lat (msec) : 100=26.21%, 250=46.23%, 500=7.52% 00:25:30.337 cpu : usr=0.20%, sys=1.58%, ctx=923, majf=0, minf=4097 00:25:30.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.338 issued rwts: total=4869,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.338 job4: (groupid=0, jobs=1): err= 0: pid=965807: Thu Jul 11 21:32:03 2024 00:25:30.338 read: IOPS=734, BW=184MiB/s (193MB/s)(1867MiB/10167msec) 00:25:30.338 slat (usec): min=12, max=141709, avg=1223.48, stdev=5921.68 00:25:30.338 clat (usec): min=1499, max=408830, avg=85830.88, stdev=63178.29 00:25:30.338 lat (usec): min=1516, max=439323, avg=87054.35, stdev=64079.52 00:25:30.338 clat percentiles (msec): 00:25:30.338 | 1.00th=[ 6], 5.00th=[ 10], 10.00th=[ 27], 20.00th=[ 36], 00:25:30.338 | 30.00th=[ 43], 40.00th=[ 56], 50.00th=[ 72], 60.00th=[ 83], 00:25:30.338 | 70.00th=[ 101], 80.00th=[ 129], 90.00th=[ 188], 95.00th=[ 213], 00:25:30.338 | 99.00th=[ 288], 99.50th=[ 305], 99.90th=[ 409], 99.95th=[ 409], 00:25:30.338 | 99.99th=[ 409] 00:25:30.338 bw ( KiB/s): min=67584, max=388096, per=10.84%, avg=189547.85, stdev=99785.26, samples=20 00:25:30.338 iops : min= 264, max= 1516, avg=740.40, 
stdev=389.78, samples=20 00:25:30.338 lat (msec) : 2=0.01%, 4=0.55%, 10=4.70%, 20=2.49%, 50=28.05% 00:25:30.338 lat (msec) : 100=34.20%, 250=28.15%, 500=1.85% 00:25:30.338 cpu : usr=0.39%, sys=2.28%, ctx=1302, majf=0, minf=4097 00:25:30.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.338 issued rwts: total=7468,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.338 job5: (groupid=0, jobs=1): err= 0: pid=965808: Thu Jul 11 21:32:03 2024 00:25:30.338 read: IOPS=544, BW=136MiB/s (143MB/s)(1390MiB/10204msec) 00:25:30.338 slat (usec): min=9, max=176419, avg=1303.07, stdev=6881.45 00:25:30.338 clat (msec): min=2, max=457, avg=116.08, stdev=80.06 00:25:30.338 lat (msec): min=2, max=457, avg=117.38, stdev=81.22 00:25:30.338 clat percentiles (msec): 00:25:30.338 | 1.00th=[ 9], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 44], 00:25:30.338 | 30.00th=[ 56], 40.00th=[ 71], 50.00th=[ 97], 60.00th=[ 128], 00:25:30.338 | 70.00th=[ 163], 80.00th=[ 192], 90.00th=[ 232], 95.00th=[ 259], 00:25:30.338 | 99.00th=[ 309], 99.50th=[ 372], 99.90th=[ 405], 99.95th=[ 409], 00:25:30.338 | 99.99th=[ 460] 00:25:30.338 bw ( KiB/s): min=57740, max=283648, per=8.04%, avg=140679.00, stdev=71982.19, samples=20 00:25:30.338 iops : min= 225, max= 1108, avg=549.50, stdev=281.21, samples=20 00:25:30.338 lat (msec) : 4=0.41%, 10=0.85%, 20=5.27%, 50=18.96%, 100=25.78% 00:25:30.338 lat (msec) : 250=42.54%, 500=6.19% 00:25:30.338 cpu : usr=0.26%, sys=1.33%, ctx=1116, majf=0, minf=4097 00:25:30.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.338 issued rwts: total=5559,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.338 job6: (groupid=0, jobs=1): err= 0: pid=965809: Thu Jul 11 21:32:03 2024 00:25:30.338 read: IOPS=440, BW=110MiB/s (115MB/s)(1119MiB/10169msec) 00:25:30.338 slat (usec): min=13, max=194741, avg=2017.75, stdev=8049.95 00:25:30.338 clat (msec): min=2, max=439, avg=143.28, stdev=88.92 00:25:30.338 lat (msec): min=2, max=439, avg=145.30, stdev=90.42 00:25:30.338 clat percentiles (msec): 00:25:30.338 | 1.00th=[ 6], 5.00th=[ 14], 10.00th=[ 26], 20.00th=[ 42], 00:25:30.338 | 30.00th=[ 74], 40.00th=[ 121], 50.00th=[ 157], 60.00th=[ 178], 00:25:30.338 | 70.00th=[ 201], 80.00th=[ 224], 90.00th=[ 257], 95.00th=[ 284], 00:25:30.338 | 99.00th=[ 334], 99.50th=[ 359], 99.90th=[ 388], 99.95th=[ 388], 00:25:30.338 | 99.99th=[ 439] 00:25:30.338 bw ( KiB/s): min=59273, max=254464, per=6.46%, avg=112941.25, stdev=64490.34, samples=20 00:25:30.338 iops : min= 231, max= 994, avg=441.15, stdev=251.94, samples=20 00:25:30.338 lat (msec) : 4=0.47%, 10=2.77%, 20=4.56%, 50=17.59%, 100=12.38% 00:25:30.338 lat (msec) : 250=50.50%, 500=11.73% 00:25:30.338 cpu : usr=0.28%, sys=1.46%, ctx=917, majf=0, minf=3721 00:25:30.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.338 issued rwts: total=4475,0,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:25:30.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.338 job7: (groupid=0, jobs=1): err= 0: pid=965810: Thu Jul 11 21:32:03 2024 00:25:30.338 read: IOPS=381, BW=95.4MiB/s (100MB/s)(968MiB/10143msec) 00:25:30.338 slat (usec): min=9, max=123779, avg=2339.50, stdev=8277.89 00:25:30.338 clat (msec): min=2, max=350, avg=165.18, stdev=78.03 00:25:30.338 lat (msec): min=2, max=375, avg=167.52, stdev=79.55 00:25:30.338 clat percentiles (msec): 00:25:30.338 | 1.00th=[ 6], 5.00th=[ 11], 10.00th=[ 36], 20.00th=[ 103], 00:25:30.338 | 30.00th=[ 132], 40.00th=[ 159], 50.00th=[ 180], 60.00th=[ 199], 00:25:30.338 | 70.00th=[ 209], 80.00th=[ 232], 90.00th=[ 257], 95.00th=[ 279], 00:25:30.338 | 99.00th=[ 309], 99.50th=[ 313], 99.90th=[ 351], 99.95th=[ 351], 00:25:30.338 | 99.99th=[ 351] 00:25:30.338 bw ( KiB/s): min=59273, max=243200, per=5.57%, avg=97504.45, stdev=45670.62, samples=20 00:25:30.338 iops : min= 231, max= 950, avg=380.85, stdev=178.42, samples=20 00:25:30.338 lat (msec) : 4=0.21%, 10=4.57%, 20=2.69%, 50=4.44%, 100=7.52% 00:25:30.338 lat (msec) : 250=69.09%, 500=11.49% 00:25:30.338 cpu : usr=0.21%, sys=1.26%, ctx=766, majf=0, minf=4097 00:25:30.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:25:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.338 issued rwts: total=3872,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.338 job8: (groupid=0, jobs=1): err= 0: pid=965811: Thu Jul 11 21:32:03 2024 00:25:30.338 read: IOPS=754, BW=189MiB/s (198MB/s)(1919MiB/10168msec) 00:25:30.338 slat (usec): min=12, max=140924, avg=1100.93, stdev=6658.07 00:25:30.338 clat (usec): min=1386, max=389533, avg=83632.45, stdev=83843.42 00:25:30.338 lat (usec): min=1413, max=444828, avg=84733.38, stdev=85125.94 00:25:30.338 clat percentiles (msec): 00:25:30.338 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 17], 20.00th=[ 26], 00:25:30.338 | 30.00th=[ 28], 40.00th=[ 30], 50.00th=[ 32], 60.00th=[ 46], 00:25:30.338 | 70.00th=[ 131], 80.00th=[ 180], 90.00th=[ 218], 95.00th=[ 243], 00:25:30.338 | 99.00th=[ 300], 99.50th=[ 313], 99.90th=[ 376], 99.95th=[ 376], 00:25:30.338 | 99.99th=[ 388] 00:25:30.338 bw ( KiB/s): min=57856, max=573828, per=11.14%, avg=194758.60, stdev=151795.07, samples=20 00:25:30.338 iops : min= 226, max= 2241, avg=760.75, stdev=592.88, samples=20 00:25:30.338 lat (msec) : 2=0.16%, 4=0.86%, 10=3.71%, 20=8.55%, 50=48.08% 00:25:30.338 lat (msec) : 100=6.87%, 250=28.21%, 500=3.56% 00:25:30.338 cpu : usr=0.34%, sys=2.35%, ctx=1509, majf=0, minf=4097 00:25:30.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:25:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.338 issued rwts: total=7674,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.338 job9: (groupid=0, jobs=1): err= 0: pid=965812: Thu Jul 11 21:32:03 2024 00:25:30.338 read: IOPS=614, BW=154MiB/s (161MB/s)(1561MiB/10166msec) 00:25:30.338 slat (usec): min=9, max=150014, avg=1305.05, stdev=6286.89 00:25:30.338 clat (usec): min=809, max=382554, avg=102833.73, stdev=76013.68 00:25:30.338 lat (usec): min=863, max=382585, avg=104138.77, stdev=77050.62 00:25:30.338 clat 
percentiles (msec): 00:25:30.338 | 1.00th=[ 4], 5.00th=[ 13], 10.00th=[ 23], 20.00th=[ 36], 00:25:30.338 | 30.00th=[ 45], 40.00th=[ 59], 50.00th=[ 80], 60.00th=[ 107], 00:25:30.338 | 70.00th=[ 144], 80.00th=[ 186], 90.00th=[ 215], 95.00th=[ 234], 00:25:30.338 | 99.00th=[ 317], 99.50th=[ 330], 99.90th=[ 368], 99.95th=[ 372], 00:25:30.338 | 99.99th=[ 384] 00:25:30.338 bw ( KiB/s): min=63488, max=291840, per=9.04%, avg=158161.40, stdev=76974.40, samples=20 00:25:30.338 iops : min= 248, max= 1140, avg=617.80, stdev=300.67, samples=20 00:25:30.338 lat (usec) : 1000=0.06% 00:25:30.338 lat (msec) : 2=0.34%, 4=0.62%, 10=2.71%, 20=5.06%, 50=24.72% 00:25:30.338 lat (msec) : 100=25.10%, 250=38.77%, 500=2.61% 00:25:30.338 cpu : usr=0.30%, sys=1.86%, ctx=1198, majf=0, minf=4097 00:25:30.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.338 issued rwts: total=6242,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.338 job10: (groupid=0, jobs=1): err= 0: pid=965813: Thu Jul 11 21:32:03 2024 00:25:30.338 read: IOPS=370, BW=92.7MiB/s (97.2MB/s)(943MiB/10166msec) 00:25:30.338 slat (usec): min=13, max=220384, avg=2377.07, stdev=9912.64 00:25:30.338 clat (msec): min=6, max=459, avg=170.01, stdev=71.90 00:25:30.338 lat (msec): min=6, max=459, avg=172.39, stdev=73.51 00:25:30.338 clat percentiles (msec): 00:25:30.338 | 1.00th=[ 15], 5.00th=[ 40], 10.00th=[ 55], 20.00th=[ 101], 00:25:30.338 | 30.00th=[ 146], 40.00th=[ 161], 50.00th=[ 178], 60.00th=[ 199], 00:25:30.338 | 70.00th=[ 211], 80.00th=[ 228], 90.00th=[ 259], 95.00th=[ 279], 00:25:30.338 | 99.00th=[ 317], 99.50th=[ 321], 99.90th=[ 388], 99.95th=[ 388], 00:25:30.338 | 99.99th=[ 460] 00:25:30.338 bw ( KiB/s): min=52119, max=160768, per=5.43%, avg=94919.55, stdev=31614.44, samples=20 00:25:30.338 iops : min= 203, max= 628, avg=370.75, stdev=123.54, samples=20 00:25:30.338 lat (msec) : 10=0.27%, 20=1.46%, 50=6.47%, 100=12.01%, 250=67.06% 00:25:30.338 lat (msec) : 500=12.73% 00:25:30.338 cpu : usr=0.18%, sys=1.26%, ctx=715, majf=0, minf=4097 00:25:30.338 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.3% 00:25:30.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:30.338 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:30.338 issued rwts: total=3771,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:30.338 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:30.338 00:25:30.338 Run status group 0 (all jobs): 00:25:30.338 READ: bw=1708MiB/s (1791MB/s), 92.7MiB/s-291MiB/s (97.2MB/s-306MB/s), io=17.0GiB (18.3GB), run=10014-10204msec 00:25:30.338 00:25:30.338 Disk stats (read/write): 00:25:30.338 nvme0n1: ios=16066/0, merge=0/0, ticks=1227527/0, in_queue=1227527, util=97.28% 00:25:30.338 nvme10n1: ios=11686/0, merge=0/0, ticks=1247705/0, in_queue=1247705, util=97.48% 00:25:30.338 nvme1n1: ios=23113/0, merge=0/0, ticks=1241139/0, in_queue=1241139, util=97.75% 00:25:30.338 nvme2n1: ios=9556/0, merge=0/0, ticks=1226278/0, in_queue=1226278, util=97.88% 00:25:30.338 nvme3n1: ios=14809/0, merge=0/0, ticks=1232995/0, in_queue=1232995, util=97.95% 00:25:30.338 nvme4n1: ios=11117/0, merge=0/0, ticks=1264765/0, in_queue=1264765, util=98.32% 00:25:30.339 nvme5n1: ios=8808/0, merge=0/0, ticks=1226636/0, in_queue=1226636, 
util=98.44%
00:25:30.339 nvme6n1: ios=7362/0, merge=0/0, ticks=1224711/0, in_queue=1224711, util=98.53%
00:25:30.339 nvme7n1: ios=15220/0, merge=0/0, ticks=1229153/0, in_queue=1229153, util=98.93%
00:25:30.339 nvme8n1: ios=12335/0, merge=0/0, ticks=1229044/0, in_queue=1229044, util=99.11%
00:25:30.339 nvme9n1: ios=7385/0, merge=0/0, ticks=1227135/0, in_queue=1227135, util=99.23%
00:25:30.339 21:32:03 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10
00:25:30.339 [global]
00:25:30.339 thread=1
00:25:30.339 invalidate=1
00:25:30.339 rw=randwrite
00:25:30.339 time_based=1
00:25:30.339 runtime=10
00:25:30.339 ioengine=libaio
00:25:30.339 direct=1
00:25:30.339 bs=262144
00:25:30.339 iodepth=64
00:25:30.339 norandommap=1
00:25:30.339 numjobs=1
00:25:30.339
00:25:30.339 [job0]
00:25:30.339 filename=/dev/nvme0n1
00:25:30.339 [job1]
00:25:30.339 filename=/dev/nvme10n1
00:25:30.339 [job2]
00:25:30.339 filename=/dev/nvme1n1
00:25:30.339 [job3]
00:25:30.339 filename=/dev/nvme2n1
00:25:30.339 [job4]
00:25:30.339 filename=/dev/nvme3n1
00:25:30.339 [job5]
00:25:30.339 filename=/dev/nvme4n1
00:25:30.339 [job6]
00:25:30.339 filename=/dev/nvme5n1
00:25:30.339 [job7]
00:25:30.339 filename=/dev/nvme6n1
00:25:30.339 [job8]
00:25:30.339 filename=/dev/nvme7n1
00:25:30.339 [job9]
00:25:30.339 filename=/dev/nvme8n1
00:25:30.339 [job10]
00:25:30.339 filename=/dev/nvme9n1
00:25:30.339 Could not set queue depth (nvme0n1)
00:25:30.339 Could not set queue depth (nvme10n1)
00:25:30.339 Could not set queue depth (nvme1n1)
00:25:30.339 Could not set queue depth (nvme2n1)
00:25:30.339 Could not set queue depth (nvme3n1)
00:25:30.339 Could not set queue depth (nvme4n1)
00:25:30.339 Could not set queue depth (nvme5n1)
00:25:30.339 Could not set queue depth (nvme6n1)
00:25:30.339 Could not set queue depth (nvme7n1)
00:25:30.339 Could not set queue depth (nvme8n1)
00:25:30.339 Could not set queue depth (nvme9n1)
00:25:30.339 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:25:30.339 fio-3.35
00:25:30.339 Starting 11 threads
00:25:40.323
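The job file echoed above is what scripts/fio-wrapper feeds to fio: eleven single-threaded libaio jobs, one per connected NVMe-oF namespace, all inheriting the [global] randwrite parameters (-i 262144 becomes bs, -d 64 becomes iodepth, -r 10 becomes runtime). As a rough standalone sketch (assuming the same /dev/nvmeXnY namespaces are still connected; this is not the harness's exact invocation), a single job of the same workload could be run with plain fio:

    # Sketch: job0 of the wrapper's workload as a direct fio call.
    # Assumes /dev/nvme0n1 is a connected NVMe-oF namespace and fio
    # with the libaio engine is installed.
    fio --name=job0 --filename=/dev/nvme0n1 \
        --rw=randwrite --bs=262144 --iodepth=64 --ioengine=libaio \
        --direct=1 --time_based --runtime=10 \
        --norandommap --invalidate=1 --thread --numjobs=1

00:25:40.323 job0: (groupid=0, jobs=1):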
err= 0: pid=966983: Thu Jul 11 21:32:14 2024 00:25:40.323 write: IOPS=634, BW=159MiB/s (166MB/s)(1599MiB/10082msec); 0 zone resets 00:25:40.323 slat (usec): min=16, max=67654, avg=1200.35, stdev=3277.82 00:25:40.323 clat (usec): min=841, max=281943, avg=99616.72, stdev=60148.88 00:25:40.323 lat (usec): min=879, max=281998, avg=100817.07, stdev=60935.25 00:25:40.323 clat percentiles (msec): 00:25:40.323 | 1.00th=[ 4], 5.00th=[ 17], 10.00th=[ 28], 20.00th=[ 45], 00:25:40.323 | 30.00th=[ 53], 40.00th=[ 78], 50.00th=[ 88], 60.00th=[ 107], 00:25:40.323 | 70.00th=[ 128], 80.00th=[ 161], 90.00th=[ 186], 95.00th=[ 207], 00:25:40.323 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 259], 00:25:40.323 | 99.99th=[ 284] 00:25:40.323 bw ( KiB/s): min=69632, max=377856, per=11.01%, avg=162110.55, stdev=74628.28, samples=20 00:25:40.323 iops : min= 272, max= 1476, avg=633.20, stdev=291.53, samples=20 00:25:40.323 lat (usec) : 1000=0.05% 00:25:40.323 lat (msec) : 2=0.14%, 4=0.95%, 10=2.17%, 20=2.91%, 50=21.31% 00:25:40.323 lat (msec) : 100=30.28%, 250=41.74%, 500=0.45% 00:25:40.323 cpu : usr=2.00%, sys=2.03%, ctx=3358, majf=0, minf=1 00:25:40.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:40.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.323 issued rwts: total=0,6397,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.323 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.323 job1: (groupid=0, jobs=1): err= 0: pid=966999: Thu Jul 11 21:32:14 2024 00:25:40.323 write: IOPS=475, BW=119MiB/s (125MB/s)(1202MiB/10121msec); 0 zone resets 00:25:40.323 slat (usec): min=23, max=110663, avg=2001.54, stdev=4169.49 00:25:40.323 clat (usec): min=1396, max=253013, avg=132517.40, stdev=41385.64 00:25:40.323 lat (msec): min=2, max=253, avg=134.52, stdev=41.83 00:25:40.323 clat percentiles (msec): 00:25:40.323 | 1.00th=[ 21], 5.00th=[ 87], 10.00th=[ 93], 20.00th=[ 99], 00:25:40.323 | 30.00th=[ 104], 40.00th=[ 115], 50.00th=[ 128], 60.00th=[ 138], 00:25:40.323 | 70.00th=[ 157], 80.00th=[ 174], 90.00th=[ 190], 95.00th=[ 199], 00:25:40.323 | 99.00th=[ 224], 99.50th=[ 232], 99.90th=[ 245], 99.95th=[ 245], 00:25:40.323 | 99.99th=[ 253] 00:25:40.323 bw ( KiB/s): min=79872, max=165376, per=8.25%, avg=121464.25, stdev=26419.02, samples=20 00:25:40.323 iops : min= 312, max= 646, avg=474.40, stdev=103.19, samples=20 00:25:40.323 lat (msec) : 2=0.04%, 4=0.06%, 10=0.15%, 20=0.75%, 50=1.98% 00:25:40.323 lat (msec) : 100=22.57%, 250=74.42%, 500=0.04% 00:25:40.323 cpu : usr=1.42%, sys=1.41%, ctx=1446, majf=0, minf=1 00:25:40.323 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:40.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.323 issued rwts: total=0,4808,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.323 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.323 job2: (groupid=0, jobs=1): err= 0: pid=967000: Thu Jul 11 21:32:14 2024 00:25:40.323 write: IOPS=472, BW=118MiB/s (124MB/s)(1199MiB/10151msec); 0 zone resets 00:25:40.323 slat (usec): min=23, max=71181, avg=1322.58, stdev=3741.06 00:25:40.323 clat (usec): min=1160, max=327279, avg=134102.62, stdev=61567.39 00:25:40.323 lat (usec): min=1204, max=327323, avg=135425.20, stdev=62349.44 00:25:40.323 clat percentiles (msec): 00:25:40.323 | 
1.00th=[ 7], 5.00th=[ 31], 10.00th=[ 50], 20.00th=[ 83], 00:25:40.323 | 30.00th=[ 99], 40.00th=[ 110], 50.00th=[ 131], 60.00th=[ 159], 00:25:40.323 | 70.00th=[ 178], 80.00th=[ 192], 90.00th=[ 211], 95.00th=[ 228], 00:25:40.323 | 99.00th=[ 266], 99.50th=[ 279], 99.90th=[ 317], 99.95th=[ 317], 00:25:40.323 | 99.99th=[ 330] 00:25:40.323 bw ( KiB/s): min=77824, max=208896, per=8.22%, avg=121117.15, stdev=38553.22, samples=20 00:25:40.323 iops : min= 304, max= 816, avg=473.05, stdev=150.60, samples=20 00:25:40.323 lat (msec) : 2=0.21%, 4=0.31%, 10=1.00%, 20=2.17%, 50=6.55% 00:25:40.323 lat (msec) : 100=22.52%, 250=65.57%, 500=1.67% 00:25:40.323 cpu : usr=1.39%, sys=1.61%, ctx=2886, majf=0, minf=1 00:25:40.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:40.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.324 issued rwts: total=0,4795,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.324 job3: (groupid=0, jobs=1): err= 0: pid=967001: Thu Jul 11 21:32:14 2024 00:25:40.324 write: IOPS=468, BW=117MiB/s (123MB/s)(1190MiB/10156msec); 0 zone resets 00:25:40.324 slat (usec): min=15, max=82154, avg=1536.27, stdev=4212.83 00:25:40.324 clat (usec): min=877, max=333098, avg=134915.03, stdev=64740.95 00:25:40.324 lat (usec): min=916, max=333127, avg=136451.30, stdev=65576.19 00:25:40.324 clat percentiles (msec): 00:25:40.324 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 27], 20.00th=[ 81], 00:25:40.324 | 30.00th=[ 105], 40.00th=[ 124], 50.00th=[ 146], 60.00th=[ 163], 00:25:40.324 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 211], 95.00th=[ 224], 00:25:40.324 | 99.00th=[ 253], 99.50th=[ 268], 99.90th=[ 317], 99.95th=[ 317], 00:25:40.324 | 99.99th=[ 334] 00:25:40.324 bw ( KiB/s): min=77824, max=277504, per=8.16%, avg=120236.90, stdev=45359.43, samples=20 00:25:40.324 iops : min= 304, max= 1084, avg=469.60, stdev=177.12, samples=20 00:25:40.324 lat (usec) : 1000=0.06% 00:25:40.324 lat (msec) : 2=0.32%, 4=0.65%, 10=2.12%, 20=4.33%, 50=6.43% 00:25:40.324 lat (msec) : 100=14.60%, 250=70.32%, 500=1.18% 00:25:40.324 cpu : usr=1.45%, sys=1.44%, ctx=2613, majf=0, minf=1 00:25:40.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:40.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.324 issued rwts: total=0,4761,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.324 job4: (groupid=0, jobs=1): err= 0: pid=967002: Thu Jul 11 21:32:14 2024 00:25:40.324 write: IOPS=488, BW=122MiB/s (128MB/s)(1227MiB/10046msec); 0 zone resets 00:25:40.324 slat (usec): min=24, max=49362, avg=1860.95, stdev=4009.81 00:25:40.324 clat (msec): min=4, max=258, avg=129.08, stdev=59.42 00:25:40.324 lat (msec): min=4, max=258, avg=130.94, stdev=60.16 00:25:40.324 clat percentiles (msec): 00:25:40.324 | 1.00th=[ 24], 5.00th=[ 44], 10.00th=[ 48], 20.00th=[ 55], 00:25:40.324 | 30.00th=[ 91], 40.00th=[ 118], 50.00th=[ 130], 60.00th=[ 159], 00:25:40.324 | 70.00th=[ 174], 80.00th=[ 184], 90.00th=[ 201], 95.00th=[ 218], 00:25:40.324 | 99.00th=[ 245], 99.50th=[ 249], 99.90th=[ 259], 99.95th=[ 259], 00:25:40.324 | 99.99th=[ 259] 00:25:40.324 bw ( KiB/s): min=68608, max=303104, per=8.42%, avg=124024.35, stdev=58857.86, samples=20 
00:25:40.324 iops : min= 268, max= 1184, avg=484.40, stdev=229.91, samples=20 00:25:40.324 lat (msec) : 10=0.20%, 20=0.61%, 50=11.92%, 100=19.07%, 250=67.83% 00:25:40.324 lat (msec) : 500=0.37% 00:25:40.324 cpu : usr=1.38%, sys=1.55%, ctx=1686, majf=0, minf=1 00:25:40.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:40.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.324 issued rwts: total=0,4908,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.324 job5: (groupid=0, jobs=1): err= 0: pid=967003: Thu Jul 11 21:32:14 2024 00:25:40.324 write: IOPS=499, BW=125MiB/s (131MB/s)(1269MiB/10150msec); 0 zone resets 00:25:40.324 slat (usec): min=15, max=77988, avg=1466.79, stdev=4137.46 00:25:40.324 clat (msec): min=3, max=344, avg=126.51, stdev=69.69 00:25:40.324 lat (msec): min=3, max=344, avg=127.97, stdev=70.67 00:25:40.324 clat percentiles (msec): 00:25:40.324 | 1.00th=[ 14], 5.00th=[ 31], 10.00th=[ 44], 20.00th=[ 71], 00:25:40.324 | 30.00th=[ 79], 40.00th=[ 88], 50.00th=[ 108], 60.00th=[ 133], 00:25:40.324 | 70.00th=[ 171], 80.00th=[ 199], 90.00th=[ 234], 95.00th=[ 249], 00:25:40.324 | 99.00th=[ 271], 99.50th=[ 279], 99.90th=[ 334], 99.95th=[ 334], 00:25:40.324 | 99.99th=[ 347] 00:25:40.324 bw ( KiB/s): min=58997, max=237568, per=8.71%, avg=128270.85, stdev=54968.61, samples=20 00:25:40.324 iops : min= 230, max= 928, avg=501.00, stdev=214.78, samples=20 00:25:40.324 lat (msec) : 4=0.06%, 10=0.45%, 20=1.46%, 50=9.93%, 100=34.69% 00:25:40.324 lat (msec) : 250=48.92%, 500=4.49% 00:25:40.324 cpu : usr=1.48%, sys=1.46%, ctx=2680, majf=0, minf=1 00:25:40.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:25:40.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.324 issued rwts: total=0,5074,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.324 job6: (groupid=0, jobs=1): err= 0: pid=967004: Thu Jul 11 21:32:14 2024 00:25:40.324 write: IOPS=547, BW=137MiB/s (144MB/s)(1385MiB/10120msec); 0 zone resets 00:25:40.324 slat (usec): min=20, max=124932, avg=1207.52, stdev=4812.79 00:25:40.324 clat (usec): min=1208, max=256837, avg=115306.37, stdev=63324.52 00:25:40.324 lat (usec): min=1263, max=259416, avg=116513.89, stdev=64022.90 00:25:40.324 clat percentiles (msec): 00:25:40.324 | 1.00th=[ 6], 5.00th=[ 22], 10.00th=[ 37], 20.00th=[ 66], 00:25:40.324 | 30.00th=[ 77], 40.00th=[ 85], 50.00th=[ 99], 60.00th=[ 126], 00:25:40.324 | 70.00th=[ 153], 80.00th=[ 184], 90.00th=[ 211], 95.00th=[ 228], 00:25:40.324 | 99.00th=[ 243], 99.50th=[ 249], 99.90th=[ 251], 99.95th=[ 255], 00:25:40.324 | 99.99th=[ 257] 00:25:40.324 bw ( KiB/s): min=71680, max=233984, per=9.52%, avg=140212.90, stdev=46643.52, samples=20 00:25:40.324 iops : min= 280, max= 914, avg=547.65, stdev=182.21, samples=20 00:25:40.324 lat (msec) : 2=0.20%, 4=0.42%, 10=1.30%, 20=2.78%, 50=10.45% 00:25:40.324 lat (msec) : 100=36.31%, 250=48.42%, 500=0.13% 00:25:40.324 cpu : usr=1.49%, sys=1.88%, ctx=3128, majf=0, minf=1 00:25:40.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:25:40.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.324 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.324 issued rwts: total=0,5541,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.324 job7: (groupid=0, jobs=1): err= 0: pid=967005: Thu Jul 11 21:32:14 2024 00:25:40.324 write: IOPS=435, BW=109MiB/s (114MB/s)(1106MiB/10153msec); 0 zone resets 00:25:40.324 slat (usec): min=22, max=54376, avg=1933.92, stdev=4368.46 00:25:40.324 clat (msec): min=2, max=339, avg=144.93, stdev=56.26 00:25:40.324 lat (msec): min=2, max=339, avg=146.86, stdev=57.07 00:25:40.324 clat percentiles (msec): 00:25:40.324 | 1.00th=[ 13], 5.00th=[ 42], 10.00th=[ 70], 20.00th=[ 103], 00:25:40.324 | 30.00th=[ 118], 40.00th=[ 130], 50.00th=[ 146], 60.00th=[ 163], 00:25:40.324 | 70.00th=[ 182], 80.00th=[ 194], 90.00th=[ 215], 95.00th=[ 230], 00:25:40.324 | 99.00th=[ 253], 99.50th=[ 275], 99.90th=[ 330], 99.95th=[ 330], 00:25:40.324 | 99.99th=[ 342] 00:25:40.324 bw ( KiB/s): min=67584, max=184832, per=7.58%, avg=111586.45, stdev=32350.69, samples=20 00:25:40.324 iops : min= 264, max= 722, avg=435.80, stdev=126.38, samples=20 00:25:40.324 lat (msec) : 4=0.09%, 10=0.77%, 20=0.95%, 50=5.16%, 100=12.05% 00:25:40.324 lat (msec) : 250=79.67%, 500=1.31% 00:25:40.324 cpu : usr=1.32%, sys=1.55%, ctx=1926, majf=0, minf=1 00:25:40.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:25:40.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.324 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.324 issued rwts: total=0,4422,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.324 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.324 job8: (groupid=0, jobs=1): err= 0: pid=967006: Thu Jul 11 21:32:14 2024 00:25:40.324 write: IOPS=483, BW=121MiB/s (127MB/s)(1229MiB/10163msec); 0 zone resets 00:25:40.324 slat (usec): min=15, max=61612, avg=1108.98, stdev=3696.83 00:25:40.324 clat (usec): min=1021, max=334878, avg=131095.36, stdev=70094.78 00:25:40.324 lat (usec): min=1050, max=334907, avg=132204.34, stdev=70805.21 00:25:40.324 clat percentiles (msec): 00:25:40.324 | 1.00th=[ 3], 5.00th=[ 7], 10.00th=[ 24], 20.00th=[ 63], 00:25:40.324 | 30.00th=[ 91], 40.00th=[ 120], 50.00th=[ 138], 60.00th=[ 159], 00:25:40.324 | 70.00th=[ 182], 80.00th=[ 197], 90.00th=[ 218], 95.00th=[ 232], 00:25:40.324 | 99.00th=[ 262], 99.50th=[ 275], 99.90th=[ 326], 99.95th=[ 326], 00:25:40.324 | 99.99th=[ 334] 00:25:40.324 bw ( KiB/s): min=70003, max=228296, per=8.43%, avg=124214.05, stdev=44450.69, samples=20 00:25:40.324 iops : min= 273, max= 891, avg=485.10, stdev=173.56, samples=20 00:25:40.324 lat (msec) : 2=0.65%, 4=2.03%, 10=3.56%, 20=3.01%, 50=7.87% 00:25:40.324 lat (msec) : 100=15.62%, 250=65.72%, 500=1.53% 00:25:40.324 cpu : usr=1.38%, sys=1.68%, ctx=3331, majf=0, minf=1 00:25:40.324 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:25:40.324 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.325 issued rwts: total=0,4916,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.325 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.325 job9: (groupid=0, jobs=1): err= 0: pid=967007: Thu Jul 11 21:32:14 2024 00:25:40.325 write: IOPS=634, BW=159MiB/s (166MB/s)(1599MiB/10082msec); 0 zone resets 00:25:40.325 slat (usec): min=18, max=28669, avg=1094.94, stdev=2721.99 00:25:40.325 clat (usec): min=1062, 
max=205631, avg=99716.98, stdev=47770.21 00:25:40.325 lat (usec): min=1146, max=208482, avg=100811.92, stdev=48228.19 00:25:40.325 clat percentiles (msec): 00:25:40.325 | 1.00th=[ 4], 5.00th=[ 16], 10.00th=[ 39], 20.00th=[ 52], 00:25:40.325 | 30.00th=[ 78], 40.00th=[ 88], 50.00th=[ 103], 60.00th=[ 115], 00:25:40.325 | 70.00th=[ 125], 80.00th=[ 140], 90.00th=[ 165], 95.00th=[ 182], 00:25:40.325 | 99.00th=[ 199], 99.50th=[ 201], 99.90th=[ 203], 99.95th=[ 205], 00:25:40.325 | 99.99th=[ 207] 00:25:40.325 bw ( KiB/s): min=90112, max=349696, per=11.00%, avg=162089.35, stdev=61471.67, samples=20 00:25:40.325 iops : min= 352, max= 1366, avg=633.05, stdev=240.09, samples=20 00:25:40.325 lat (msec) : 2=0.20%, 4=0.81%, 10=2.58%, 20=2.74%, 50=13.01% 00:25:40.325 lat (msec) : 100=29.07%, 250=51.59% 00:25:40.325 cpu : usr=1.65%, sys=2.20%, ctx=3312, majf=0, minf=1 00:25:40.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:25:40.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.325 issued rwts: total=0,6396,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.325 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.325 job10: (groupid=0, jobs=1): err= 0: pid=967008: Thu Jul 11 21:32:14 2024 00:25:40.325 write: IOPS=637, BW=159MiB/s (167MB/s)(1614MiB/10121msec); 0 zone resets 00:25:40.325 slat (usec): min=22, max=62932, avg=950.60, stdev=2902.63 00:25:40.325 clat (msec): min=4, max=272, avg=99.33, stdev=58.66 00:25:40.325 lat (msec): min=4, max=272, avg=100.28, stdev=59.25 00:25:40.325 clat percentiles (msec): 00:25:40.325 | 1.00th=[ 12], 5.00th=[ 29], 10.00th=[ 40], 20.00th=[ 44], 00:25:40.325 | 30.00th=[ 45], 40.00th=[ 66], 50.00th=[ 101], 60.00th=[ 118], 00:25:40.325 | 70.00th=[ 129], 80.00th=[ 148], 90.00th=[ 186], 95.00th=[ 211], 00:25:40.325 | 99.00th=[ 239], 99.50th=[ 253], 99.90th=[ 268], 99.95th=[ 271], 00:25:40.325 | 99.99th=[ 271] 00:25:40.325 bw ( KiB/s): min=87888, max=374784, per=11.11%, avg=163677.35, stdev=81156.25, samples=20 00:25:40.325 iops : min= 343, max= 1464, avg=639.25, stdev=317.08, samples=20 00:25:40.325 lat (msec) : 10=0.76%, 20=1.81%, 50=33.25%, 100=13.98%, 250=49.61% 00:25:40.325 lat (msec) : 500=0.59% 00:25:40.325 cpu : usr=1.80%, sys=2.14%, ctx=3592, majf=0, minf=1 00:25:40.325 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:25:40.325 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:25:40.325 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:25:40.325 issued rwts: total=0,6457,0,0 short=0,0,0,0 dropped=0,0,0,0 00:25:40.325 latency : target=0, window=0, percentile=100.00%, depth=64 00:25:40.325 00:25:40.325 Run status group 0 (all jobs): 00:25:40.325 WRITE: bw=1438MiB/s (1508MB/s), 109MiB/s-159MiB/s (114MB/s-167MB/s), io=14.3GiB (15.3GB), run=10046-10163msec 00:25:40.325 00:25:40.325 Disk stats (read/write): 00:25:40.325 nvme0n1: ios=48/12738, merge=0/0, ticks=1140/1233991, in_queue=1235131, util=99.90% 00:25:40.325 nvme10n1: ios=43/9538, merge=0/0, ticks=1627/1214391, in_queue=1216018, util=100.00% 00:25:40.325 nvme1n1: ios=42/9502, merge=0/0, ticks=880/1236127, in_queue=1237007, util=100.00% 00:25:40.325 nvme2n1: ios=0/9427, merge=0/0, ticks=0/1229756, in_queue=1229756, util=97.12% 00:25:40.325 nvme3n1: ios=33/9804, merge=0/0, ticks=1654/1229084, in_queue=1230738, util=100.00% 00:25:40.325 nvme4n1: ios=0/10065, merge=0/0, ticks=0/1229513, 
in_queue=1229513, util=97.98% 00:25:40.325 nvme5n1: ios=43/11005, merge=0/0, ticks=3697/1201065, in_queue=1204762, util=100.00% 00:25:40.325 nvme6n1: ios=37/8753, merge=0/0, ticks=1438/1222251, in_queue=1223689, util=100.00% 00:25:40.325 nvme7n1: ios=38/9727, merge=0/0, ticks=418/1236634, in_queue=1237052, util=100.00% 00:25:40.325 nvme8n1: ios=32/12735, merge=0/0, ticks=1017/1236722, in_queue=1237739, util=100.00% 00:25:40.325 nvme9n1: ios=0/12834, merge=0/0, ticks=0/1240143, in_queue=1240143, util=99.13% 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:25:40.325 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK1 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK1 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:25:40.325 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK2 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK2 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.325 21:32:14 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:25:40.583 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:25:40.583 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:25:40.583 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:40.583 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:40.583 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK3 00:25:40.840 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:40.840 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK3 00:25:40.840 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:40.840 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:25:40.841 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK4 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK4 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:40.841 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme 
disconnect -n nqn.2016-06.io.spdk:cnode5 00:25:41.098 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:25:41.098 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:25:41.098 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:41.098 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:41.098 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK5 00:25:41.098 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:41.099 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK5 00:25:41.099 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:41.099 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:25:41.099 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.099 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.099 21:32:15 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.099 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.099 21:32:15 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:25:41.357 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK6 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK6 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.357 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:25:41.617 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:41.617 21:32:16 
nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK7 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK7 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.617 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:25:41.875 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:25:41.875 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:25:41.875 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:41.875 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:41.875 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK8 00:25:41.875 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:41.875 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK8 00:25:41.875 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:25:41.876 NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:25:41.876 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK9 00:25:42.134 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:25:42.134 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK9 00:25:42.134 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0 00:25:42.134 21:32:16 nvmf_tcp.nvmf_multiconnection -- 
target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9
00:25:42.134 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:42.134 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.134 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10
00:25:42.135 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s)
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK10
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK10
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11
00:25:42.135 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s)
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1219 -- # local i=0
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1220 -- # grep -q -w SPDK11
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1227 -- # grep -q -w SPDK11
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # return 0
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@559 -- # xtrace_disable
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
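Every iteration of the loop traced above repeats the same teardown pattern: disconnect the initiator from cnode$i, poll until serial SPDK$i no longer appears in lsblk, then delete the subsystem over the RPC socket. A condensed sketch of that pattern (rpc_cmd is the harness's wrapper around SPDK's rpc.py; the plain script path below is an assumption, and the polling loop is simplified from waitforserial_disconnect):

    # Sketch of the per-subsystem teardown driven by multiconnection.sh
    # (NVMF_SUBSYS is 11 in this run).
    for i in $(seq 1 11); do
        nvme disconnect -n "nqn.2016-06.io.spdk:cnode${i}"   # host side
        # wait until no block device advertises serial SPDK${i}
        while lsblk -l -o NAME,SERIAL | grep -q -w "SPDK${i}"; do
            sleep 1
        done
        scripts/rpc.py nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode${i}"  # target side
    done

00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- 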
target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@117 -- # sync 00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@120 -- # set +e 00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:42.135 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:42.135 rmmod nvme_tcp 00:25:42.135 rmmod nvme_fabrics 00:25:42.393 rmmod nvme_keyring 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@124 -- # set -e 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@125 -- # return 0 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@489 -- # '[' -n 961549 ']' 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@490 -- # killprocess 961549 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@948 -- # '[' -z 961549 ']' 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@952 -- # kill -0 961549 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # uname 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 961549 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@966 -- # echo 'killing process with pid 961549' 00:25:42.393 killing process with pid 961549 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@967 -- # kill 961549 00:25:42.393 21:32:16 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@972 -- # wait 961549 00:25:42.960 21:32:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:42.960 21:32:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:42.960 21:32:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:42.960 21:32:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:42.960 21:32:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:42.960 21:32:17 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:42.960 21:32:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:42.960 21:32:17 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.911 21:32:19 nvmf_tcp.nvmf_multiconnection -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.911 00:25:44.911 real 
1m0.726s 00:25:44.911 user 3m21.710s 00:25:44.911 sys 0m25.241s 00:25:44.911 21:32:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:44.911 21:32:19 nvmf_tcp.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:44.911 ************************************ 00:25:44.911 END TEST nvmf_multiconnection 00:25:44.911 ************************************ 00:25:44.911 21:32:19 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:25:44.911 21:32:19 nvmf_tcp -- nvmf/nvmf.sh@68 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:44.911 21:32:19 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:44.911 21:32:19 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:44.911 21:32:19 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:44.911 ************************************ 00:25:44.911 START TEST nvmf_initiator_timeout 00:25:44.911 ************************************ 00:25:44.911 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:25:44.911 * Looking for test storage... 00:25:44.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.911 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.911 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:25:44.911 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.911 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.911 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@47 -- # : 0 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.912 21:32:19 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # e810=() 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # x722=() 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # mlx=() 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.815 21:32:21 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:46.815 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:46.815 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.815 
21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:46.815 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:46.815 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.815 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@240 -- # 
NVMF_SECOND_TARGET_IP= 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.816 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.816 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:25:46.816 00:25:46.816 --- 10.0.0.2 ping statistics --- 00:25:46.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.816 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.816 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:46.816 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.063 ms 00:25:46.816 00:25:46.816 --- 10.0.0.1 ping statistics --- 00:25:46.816 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.816 rtt min/avg/max/mdev = 0.063/0.063/0.063/0.000 ms 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # return 0 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:46.816 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@481 -- # nvmfpid=970360 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # waitforlisten 970360 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@829 -- # '[' -z 970360 ']' 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:47.074 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.074 [2024-07-11 21:32:21.651279] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
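
The trace above is the harness's per-test TCP bring-up: the target-side port is moved into a private network namespace, both ends of the link are addressed, TCP/4420 is opened in the firewall, reachability is verified in both directions, and the target application is then launched inside the namespace. A condensed standalone sketch of the same bring-up follows; the interface names, addresses, and binary path are copied from the trace and are specific to this rig, and the readiness poll at the end is illustrative rather than the harness's own waitforlisten helper.

# Isolate the target-side port so initiator and target traffic cross a real link
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk          # target port into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator address, host side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
ping -c 1 10.0.0.2                                 # host to namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # namespace to host

# Launch the target inside the namespace and wait for its RPC socket
modprobe nvme-tcp
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
until [ -S /var/tmp/spdk.sock ]; do
    kill -0 "$nvmfpid" 2>/dev/null || { echo 'nvmf_tgt exited early' >&2; exit 1; }
    sleep 0.5
done
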
00:25:47.074 [2024-07-11 21:32:21.651347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.074 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.074 [2024-07-11 21:32:21.719587] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:47.074 [2024-07-11 21:32:21.814177] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.074 [2024-07-11 21:32:21.814228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.074 [2024-07-11 21:32:21.814245] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.074 [2024-07-11 21:32:21.814258] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.074 [2024-07-11 21:32:21.814270] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:47.074 [2024-07-11 21:32:21.814361] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:47.074 [2024-07-11 21:32:21.814434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:47.074 [2024-07-11 21:32:21.814497] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:47.074 [2024-07-11 21:32:21.814500] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@862 -- # return 0 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.334 Malloc0 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.334 21:32:21 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.334 Delay0 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:47.334 21:32:22 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.334 [2024-07-11 21:32:22.008415] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:47.334 [2024-07-11 21:32:22.036670] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.334 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:25:48.272 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:25:48.272 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1198 -- # local i=0 00:25:48.272 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:25:48.272 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:25:48.272 21:32:22 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1205 -- # sleep 2 00:25:50.173 21:32:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:25:50.173 21:32:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:25:50.173 21:32:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:25:50.173 21:32:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:25:50.173 21:32:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:25:50.173 21:32:24 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1208 -- # return 0 00:25:50.173 21:32:24 
nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=970779 00:25:50.173 21:32:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:25:50.173 21:32:24 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:25:50.173 [global] 00:25:50.173 thread=1 00:25:50.173 invalidate=1 00:25:50.173 rw=write 00:25:50.173 time_based=1 00:25:50.173 runtime=60 00:25:50.173 ioengine=libaio 00:25:50.173 direct=1 00:25:50.173 bs=4096 00:25:50.173 iodepth=1 00:25:50.173 norandommap=0 00:25:50.173 numjobs=1 00:25:50.173 00:25:50.173 verify_dump=1 00:25:50.173 verify_backlog=512 00:25:50.173 verify_state_save=0 00:25:50.173 do_verify=1 00:25:50.173 verify=crc32c-intel 00:25:50.173 [job0] 00:25:50.173 filename=/dev/nvme0n1 00:25:50.173 Could not set queue depth (nvme0n1) 00:25:50.431 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:25:50.431 fio-3.35 00:25:50.431 Starting 1 thread 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.719 true 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.719 true 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.719 true 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:53.719 true 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.719 21:32:27 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.245 true 00:25:56.245 21:32:30 
nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.245 true 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.245 true 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:25:56.245 true 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:25:56.245 21:32:30 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 970779 00:26:52.480 00:26:52.480 job0: (groupid=0, jobs=1): err= 0: pid=970855: Thu Jul 11 21:33:25 2024 00:26:52.480 read: IOPS=16, BW=65.9KiB/s (67.5kB/s)(3956KiB/60011msec) 00:26:52.480 slat (usec): min=5, max=12928, avg=29.51, stdev=410.70 00:26:52.480 clat (usec): min=271, max=40977k, avg=60389.11, stdev=1302545.44 00:26:52.480 lat (usec): min=279, max=40977k, avg=60418.62, stdev=1302545.98 00:26:52.480 clat percentiles (usec): 00:26:52.480 | 1.00th=[ 306], 5.00th=[ 326], 10.00th=[ 330], 00:26:52.480 | 20.00th=[ 343], 30.00th=[ 359], 40.00th=[ 371], 00:26:52.480 | 50.00th=[ 396], 60.00th=[ 41157], 70.00th=[ 41157], 00:26:52.480 | 80.00th=[ 41157], 90.00th=[ 41157], 95.00th=[ 41157], 00:26:52.480 | 99.00th=[ 42206], 99.50th=[ 42206], 99.90th=[17112761], 00:26:52.480 | 99.95th=[17112761], 99.99th=[17112761] 00:26:52.480 write: IOPS=17, BW=68.3KiB/s (69.9kB/s)(4096KiB/60011msec); 0 zone resets 00:26:52.480 slat (nsec): min=6665, max=65026, avg=11621.81, stdev=6958.41 00:26:52.480 clat (usec): min=194, max=424, avg=230.07, stdev=30.63 00:26:52.480 lat (usec): min=203, max=468, avg=241.69, stdev=36.16 00:26:52.480 clat percentiles (usec): 00:26:52.480 | 1.00th=[ 198], 5.00th=[ 204], 10.00th=[ 206], 20.00th=[ 212], 00:26:52.480 | 30.00th=[ 215], 40.00th=[ 219], 50.00th=[ 221], 60.00th=[ 225], 00:26:52.480 | 70.00th=[ 231], 80.00th=[ 241], 90.00th=[ 269], 95.00th=[ 289], 00:26:52.480 | 99.00th=[ 367], 99.50th=[ 379], 99.90th=[ 416], 99.95th=[ 424], 00:26:52.480 | 99.99th=[ 424] 00:26:52.480 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=2 00:26:52.480 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:26:52.480 lat (usec) : 250=43.22%, 500=33.28%, 750=0.94% 00:26:52.480 lat (msec) : 10=0.05%, 50=22.45%, >=2000=0.05% 00:26:52.480 cpu : usr=0.04%, sys=0.07%, 
ctx=2014, majf=0, minf=2 00:26:52.480 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:52.480 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.480 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:52.480 issued rwts: total=989,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:52.480 latency : target=0, window=0, percentile=100.00%, depth=1 00:26:52.480 00:26:52.480 Run status group 0 (all jobs): 00:26:52.480 READ: bw=65.9KiB/s (67.5kB/s), 65.9KiB/s-65.9KiB/s (67.5kB/s-67.5kB/s), io=3956KiB (4051kB), run=60011-60011msec 00:26:52.480 WRITE: bw=68.3KiB/s (69.9kB/s), 68.3KiB/s-68.3KiB/s (69.9kB/s-69.9kB/s), io=4096KiB (4194kB), run=60011-60011msec 00:26:52.480 00:26:52.480 Disk stats (read/write): 00:26:52.480 nvme0n1: ios=1085/1024, merge=0/0, ticks=19849/231, in_queue=20080, util=99.81% 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:52.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1219 -- # local i=0 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # return 0 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:26:52.480 nvmf hotplug test: fio successful as expected 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@117 -- # sync 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@120 -- # set +e 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:52.480 21:33:25 
nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:52.480 rmmod nvme_tcp 00:26:52.480 rmmod nvme_fabrics 00:26:52.480 rmmod nvme_keyring 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set -e 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # return 0 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@489 -- # '[' -n 970360 ']' 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@490 -- # killprocess 970360 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@948 -- # '[' -z 970360 ']' 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@952 -- # kill -0 970360 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # uname 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 970360 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@966 -- # echo 'killing process with pid 970360' 00:26:52.480 killing process with pid 970360 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@967 -- # kill 970360 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # wait 970360 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:52.480 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:52.481 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:52.481 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:26:52.481 21:33:25 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:53.047 21:33:27 nvmf_tcp.nvmf_initiator_timeout -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:53.047 00:26:53.047 real 1m8.045s 00:26:53.047 user 4m10.974s 00:26:53.047 sys 0m6.290s 00:26:53.047 21:33:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:53.047 21:33:27 nvmf_tcp.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:53.047 ************************************ 00:26:53.047 END TEST nvmf_initiator_timeout 00:26:53.047 ************************************ 00:26:53.047 21:33:27 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:26:53.047 21:33:27 nvmf_tcp -- nvmf/nvmf.sh@71 -- # [[ phy == phy ]] 00:26:53.047 21:33:27 nvmf_tcp -- nvmf/nvmf.sh@72 -- # '[' tcp = tcp ']' 00:26:53.047 21:33:27 nvmf_tcp -- nvmf/nvmf.sh@73 -- # 
gather_supported_nvmf_pci_devs 00:26:53.047 21:33:27 nvmf_tcp -- nvmf/common.sh@285 -- # xtrace_disable 00:26:53.047 21:33:27 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@291 -- # pci_devs=() 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@295 -- # net_devs=() 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@296 -- # e810=() 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@296 -- # local -ga e810 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@297 -- # x722=() 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@297 -- # local -ga x722 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@298 -- # mlx=() 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@298 -- # local -ga mlx 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:54.952 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.952 21:33:29 
nvmf_tcp -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:54.952 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:54.952 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:54.952 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/nvmf.sh@74 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/nvmf.sh@75 -- # (( 2 > 0 )) 00:26:54.952 21:33:29 nvmf_tcp -- nvmf/nvmf.sh@76 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:26:54.952 21:33:29 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:54.952 21:33:29 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:54.952 21:33:29 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:26:54.952 ************************************ 00:26:54.952 START TEST nvmf_perf_adq 00:26:54.952 ************************************ 00:26:54.952 21:33:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 
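
The nvmf_initiator_timeout test that ends above is driven entirely by RPCs against the running target: a malloc bdev is wrapped in a delay bdev, exported over NVMe/TCP, and connected locally; while fio writes through the resulting /dev/nvme0n1, the delay bdev's latencies are raised far past the initiator's timeout and later restored, which is why the fio percentiles split between sub-millisecond completions and a long-latency cluster. A condensed sketch of that control path, with the RPC names and arguments taken from the trace; rpc.py stands in for the harness's rpc_cmd wrapper.

rpc=scripts/rpc.py

# Backing stack: 64 MiB malloc bdev behind a delay bdev (latencies in microseconds)
$rpc bdev_malloc_create 64 512 -b Malloc0
$rpc bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30

# Export Delay0 over NVMe/TCP and connect from the host side
$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420

# With fio running against the attached namespace, push latency past the timeout...
for metric in avg_read avg_write p99_read; do
    $rpc bdev_delay_update_latency Delay0 "$metric" 31000000
done
$rpc bdev_delay_update_latency Delay0 p99_write 310000000
sleep 3
# ...then restore it; fio must ride out the reconnects and finish cleanly
for metric in avg_read avg_write p99_read p99_write; do
    $rpc bdev_delay_update_latency Delay0 "$metric" 30
done
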
00:26:55.210 * Looking for test storage... 00:26:55.210 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:55.210 21:33:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:55.210 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:26:55.210 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:55.210 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:55.210 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:55.210 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:55.210 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:26:55.211 21:33:29 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:57.141 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:57.141 Found 0000:0a:00.1 (0x8086 - 0x159b) 
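
The device scan repeating through this log buckets NICs by PCI vendor/device ID and then resolves each function to its kernel interface through sysfs, keeping only links that are up. A minimal sketch of both steps; the E810 device ID 0x159b and the 0000:0a:00.x addresses come from the trace, and lspci here stands in for the harness's cached PCI bus scan.

# Collect Intel E810 functions (vendor 0x8086, device 0x159b) by PCI address
mapfile -t e810 < <(lspci -D -d 8086:159b | awk '{print $1}')

for pci in "${e810[@]}"; do
    # A bound function exposes its netdev(s) under .../net/ in sysfs
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
        name=${dev##*/}
        state=$(cat "$dev/operstate")      # the harness keeps only "up" links
        echo "Found net devices under $pci: $name ($state)"
    done
done
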
00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.141 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:26:57.142 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:57.142 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:26:57.142 21:33:31 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:26:57.709 21:33:32 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:26:59.611 21:33:34 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:04.887 21:33:39 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:04.887 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:04.888 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:04.888 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:04.888 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:04.888 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:04.888 21:33:39 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:04.888 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:04.888 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:27:04.888 00:27:04.888 --- 10.0.0.2 ping statistics --- 00:27:04.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.888 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:04.888 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:04.888 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:27:04.888 00:27:04.888 --- 10.0.0.1 ping statistics --- 00:27:04.888 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:04.888 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=982983 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 982983 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 982983 ']' 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:04.888 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:04.888 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:04.888 [2024-07-11 21:33:39.542179] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
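Note for anyone replaying this setup by hand: the nvmf_tcp_init sequence traced above builds the whole test topology out of one dual-port E810 NIC by hiding the target-side port in a private network namespace, so initiator and target traffic cross the cabled link between the two ports rather than short-circuiting through the host stack. Distilled to a minimal sketch (the interface names cvl_0_0/cvl_0_1, the 10.0.0.0/24 addressing, and port 4420 are specific to this run, not a general convention):

  ip -4 addr flush cvl_0_0                 # drop stale addressing on both ports
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk             # private namespace for the target port
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1      # initiator stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # NVMe/TCP port
  ping -c 1 10.0.0.2                       # root namespace -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # and back

The two sub-millisecond pings above are the sanity check before the function returns 0 and the target application is launched inside the namespace.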
00:27:04.888 [2024-07-11 21:33:39.542271] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:04.888 EAL: No free 2048 kB hugepages reported on node 1 00:27:04.888 [2024-07-11 21:33:39.610448] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:05.147 [2024-07-11 21:33:39.706550] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:05.147 [2024-07-11 21:33:39.706600] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:05.147 [2024-07-11 21:33:39.706628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:05.147 [2024-07-11 21:33:39.706643] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:05.147 [2024-07-11 21:33:39.706654] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:05.147 [2024-07-11 21:33:39.706714] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:05.147 [2024-07-11 21:33:39.706778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:05.147 [2024-07-11 21:33:39.706806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:05.147 [2024-07-11 21:33:39.706809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.147 [2024-07-11 21:33:39.909268] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.147 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.404 Malloc1 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:05.404 [2024-07-11 21:33:39.960013] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=983128 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:05.404 21:33:39 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:27:05.404 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.300 21:33:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:27:07.300 21:33:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:07.300 21:33:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:07.300 21:33:41 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:07.300 21:33:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:27:07.300 
"tick_rate": 2700000000, 00:27:07.300 "poll_groups": [ 00:27:07.300 { 00:27:07.300 "name": "nvmf_tgt_poll_group_000", 00:27:07.300 "admin_qpairs": 1, 00:27:07.300 "io_qpairs": 1, 00:27:07.300 "current_admin_qpairs": 1, 00:27:07.300 "current_io_qpairs": 1, 00:27:07.300 "pending_bdev_io": 0, 00:27:07.300 "completed_nvme_io": 21236, 00:27:07.300 "transports": [ 00:27:07.300 { 00:27:07.300 "trtype": "TCP" 00:27:07.300 } 00:27:07.300 ] 00:27:07.300 }, 00:27:07.300 { 00:27:07.300 "name": "nvmf_tgt_poll_group_001", 00:27:07.300 "admin_qpairs": 0, 00:27:07.300 "io_qpairs": 1, 00:27:07.300 "current_admin_qpairs": 0, 00:27:07.300 "current_io_qpairs": 1, 00:27:07.300 "pending_bdev_io": 0, 00:27:07.300 "completed_nvme_io": 21236, 00:27:07.300 "transports": [ 00:27:07.300 { 00:27:07.300 "trtype": "TCP" 00:27:07.300 } 00:27:07.300 ] 00:27:07.300 }, 00:27:07.300 { 00:27:07.300 "name": "nvmf_tgt_poll_group_002", 00:27:07.300 "admin_qpairs": 0, 00:27:07.300 "io_qpairs": 1, 00:27:07.300 "current_admin_qpairs": 0, 00:27:07.300 "current_io_qpairs": 1, 00:27:07.300 "pending_bdev_io": 0, 00:27:07.301 "completed_nvme_io": 18831, 00:27:07.301 "transports": [ 00:27:07.301 { 00:27:07.301 "trtype": "TCP" 00:27:07.301 } 00:27:07.301 ] 00:27:07.301 }, 00:27:07.301 { 00:27:07.301 "name": "nvmf_tgt_poll_group_003", 00:27:07.301 "admin_qpairs": 0, 00:27:07.301 "io_qpairs": 1, 00:27:07.301 "current_admin_qpairs": 0, 00:27:07.301 "current_io_qpairs": 1, 00:27:07.301 "pending_bdev_io": 0, 00:27:07.301 "completed_nvme_io": 20993, 00:27:07.301 "transports": [ 00:27:07.301 { 00:27:07.301 "trtype": "TCP" 00:27:07.301 } 00:27:07.301 ] 00:27:07.301 } 00:27:07.301 ] 00:27:07.301 }' 00:27:07.301 21:33:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:27:07.301 21:33:41 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:27:07.301 21:33:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:27:07.301 21:33:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:27:07.301 21:33:42 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 983128 00:27:15.401 Initializing NVMe Controllers 00:27:15.401 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:15.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:15.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:15.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:15.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:15.401 Initialization complete. Launching workers. 
00:27:15.401 ======================================================== 00:27:15.401 Latency(us) 00:27:15.401 Device Information : IOPS MiB/s Average min max 00:27:15.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10994.71 42.95 5822.87 2983.66 8623.47 00:27:15.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 11134.51 43.49 5747.74 3070.98 7243.46 00:27:15.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 9835.13 38.42 6508.06 2800.55 10002.58 00:27:15.401 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 11117.41 43.43 5758.89 2657.49 9391.05 00:27:15.401 ======================================================== 00:27:15.401 Total : 43081.76 168.29 5943.37 2657.49 10002.58 00:27:15.401 00:27:15.401 21:33:50 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:27:15.401 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:15.401 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:15.401 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:15.401 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:15.401 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:15.401 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:15.401 rmmod nvme_tcp 00:27:15.401 rmmod nvme_fabrics 00:27:15.401 rmmod nvme_keyring 00:27:15.401 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 982983 ']' 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 982983 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 982983 ']' 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 982983 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 982983 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 982983' 00:27:15.658 killing process with pid 982983 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 982983 00:27:15.658 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 982983 00:27:15.916 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:15.916 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:15.916 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:15.916 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:15.916 21:33:50 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:15.916 21:33:50 nvmf_tcp.nvmf_perf_adq -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.916 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:15.916 21:33:50 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:17.816 21:33:52 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:17.816 21:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:27:17.816 21:33:52 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:27:18.384 21:33:53 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:27:20.909 21:33:55 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:26.234 21:34:00 
nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:26.234 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:26.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:26.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
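The device walk being replayed here (the same one that ran at 21:33:39) leans on a single sysfs fact: a PCI function with a bound network driver exposes its interface name under /sys/bus/pci/devices/<bdf>/net/. A standalone sketch of that lookup; the helper name pci_to_netdev is ours for illustration, not something defined in nvmf/common.sh:

  #!/usr/bin/env bash
  # Print the net device(s) backing a PCI function, the same way
  # nvmf/common.sh does with pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*).
  pci_to_netdev() {
      local pci=$1 path
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [[ -e $path ]] || return 1   # glob didn't match: no driver bound
          echo "${path##*/}"           # keep only the interface name
      done
  }

  pci_to_netdev 0000:0a:00.0   # -> cvl_0_0 on this machine

If the ice driver were still unloaded at this point the glob would not match, which is the condition the script's (( ... == 0 )) guard treats as an unusable port.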
00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:26.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:26.235 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:26.235 
21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:26.235 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:26.235 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:27:26.235 00:27:26.235 --- 10.0.0.2 ping statistics --- 00:27:26.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.235 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:26.235 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:26.235 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:27:26.235 00:27:26.235 --- 10.0.0.1 ping statistics --- 00:27:26.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:26.235 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:26.235 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:27:26.236 net.core.busy_poll = 1 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:27:26.236 net.core.busy_read = 1 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc 
add dev cvl_0_0 ingress 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=985731 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 985731 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 985731 ']' 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:26.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 [2024-07-11 21:34:00.444244] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:26.236 [2024-07-11 21:34:00.444347] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:26.236 EAL: No free 2048 kB hugepages reported on node 1 00:27:26.236 [2024-07-11 21:34:00.514279] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:26.236 [2024-07-11 21:34:00.605249] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:26.236 [2024-07-11 21:34:00.605312] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:26.236 [2024-07-11 21:34:00.605338] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:26.236 [2024-07-11 21:34:00.605353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:26.236 [2024-07-11 21:34:00.605366] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
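The ethtool/sysctl/tc sequence traced above is the entire ADQ recipe for this second pass: mqprio carves the port's queues into two traffic classes (num_tc 2 map 0 1 queues 2@0 2@2 gives TC0 queues 0-1 and TC1 queues 2-3), a hardware-only flower filter (skip_sw) pins NVMe/TCP traffic for 10.0.0.2:4420 onto TC1, and the busy_poll/busy_read sysctls let the sockets spin briefly instead of sleeping on interrupts. Collected into one runnable block, using the same names and values as this run:

  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 \
      channel-pkt-inspect-optimize off
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
      num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip \
      parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp \
      dst_port 4420 skip_sw hw_tc 1

On the SPDK side this is paired a few records further down with `sock_impl_set_options --enable-placement-id 1` and `nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1`, versus placement-id 0 and sock-priority 0 in the non-ADQ run earlier in the log.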
00:27:26.236 [2024-07-11 21:34:00.605452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:26.236 [2024-07-11 21:34:00.605505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:26.236 [2024-07-11 21:34:00.605619] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:26.236 [2024-07-11 21:34:00.605622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 [2024-07-11 21:34:00.802277] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 Malloc1 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.236 21:34:00 
nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.236 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:26.237 [2024-07-11 21:34:00.852926] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.237 21:34:00 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.237 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=985766 00:27:26.237 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:27:26.237 21:34:00 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:27:26.237 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.164 21:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:27:28.164 21:34:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:28.164 21:34:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:28.164 21:34:02 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:28.164 21:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:27:28.164 "tick_rate": 2700000000, 00:27:28.164 "poll_groups": [ 00:27:28.164 { 00:27:28.164 "name": "nvmf_tgt_poll_group_000", 00:27:28.164 "admin_qpairs": 1, 00:27:28.164 "io_qpairs": 1, 00:27:28.164 "current_admin_qpairs": 1, 00:27:28.164 "current_io_qpairs": 1, 00:27:28.164 "pending_bdev_io": 0, 00:27:28.164 "completed_nvme_io": 25950, 00:27:28.164 "transports": [ 00:27:28.164 { 00:27:28.164 "trtype": "TCP" 00:27:28.164 } 00:27:28.164 ] 00:27:28.164 }, 00:27:28.164 { 00:27:28.164 "name": "nvmf_tgt_poll_group_001", 00:27:28.164 "admin_qpairs": 0, 00:27:28.164 "io_qpairs": 3, 00:27:28.164 "current_admin_qpairs": 0, 00:27:28.164 "current_io_qpairs": 3, 00:27:28.164 "pending_bdev_io": 0, 00:27:28.164 "completed_nvme_io": 27264, 00:27:28.164 "transports": [ 00:27:28.164 { 00:27:28.164 "trtype": "TCP" 00:27:28.164 } 00:27:28.164 ] 00:27:28.164 }, 00:27:28.164 { 00:27:28.164 "name": "nvmf_tgt_poll_group_002", 00:27:28.164 "admin_qpairs": 0, 00:27:28.164 "io_qpairs": 0, 00:27:28.164 "current_admin_qpairs": 0, 00:27:28.164 "current_io_qpairs": 0, 00:27:28.164 "pending_bdev_io": 0, 00:27:28.164 "completed_nvme_io": 0, 
00:27:28.164 "transports": [ 00:27:28.164 { 00:27:28.164 "trtype": "TCP" 00:27:28.164 } 00:27:28.164 ] 00:27:28.164 }, 00:27:28.164 { 00:27:28.164 "name": "nvmf_tgt_poll_group_003", 00:27:28.164 "admin_qpairs": 0, 00:27:28.164 "io_qpairs": 0, 00:27:28.164 "current_admin_qpairs": 0, 00:27:28.164 "current_io_qpairs": 0, 00:27:28.164 "pending_bdev_io": 0, 00:27:28.164 "completed_nvme_io": 0, 00:27:28.164 "transports": [ 00:27:28.164 { 00:27:28.165 "trtype": "TCP" 00:27:28.165 } 00:27:28.165 ] 00:27:28.165 } 00:27:28.165 ] 00:27:28.165 }' 00:27:28.165 21:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:27:28.165 21:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:27:28.165 21:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:27:28.165 21:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:27:28.165 21:34:02 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 985766 00:27:36.278 Initializing NVMe Controllers 00:27:36.278 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:36.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:27:36.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:27:36.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:27:36.278 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:27:36.278 Initialization complete. Launching workers. 00:27:36.278 ======================================================== 00:27:36.278 Latency(us) 00:27:36.278 Device Information : IOPS MiB/s Average min max 00:27:36.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5126.60 20.03 12485.19 2356.41 61546.31 00:27:36.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 3965.80 15.49 16205.30 2223.98 61002.41 00:27:36.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 13740.70 53.67 4657.98 1793.15 6717.08 00:27:36.278 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5213.90 20.37 12276.76 2005.24 61301.66 00:27:36.278 ======================================================== 00:27:36.278 Total : 28047.00 109.56 9137.78 1793.15 61546.31 00:27:36.278 00:27:36.278 21:34:11 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:27:36.278 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:36.278 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:27:36.278 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:36.278 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:27:36.278 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:36.278 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:36.278 rmmod nvme_tcp 00:27:36.535 rmmod nvme_fabrics 00:27:36.535 rmmod nvme_keyring 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 985731 ']' 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@490 -- # 
killprocess 985731 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 985731 ']' 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 985731 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 985731 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 985731' 00:27:36.535 killing process with pid 985731 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 985731 00:27:36.535 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 985731 00:27:36.794 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:36.794 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:36.794 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:36.794 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:36.794 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:36.794 21:34:11 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:36.794 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:36.794 21:34:11 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.076 21:34:14 nvmf_tcp.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:40.076 21:34:14 nvmf_tcp.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:27:40.076 00:27:40.076 real 0m44.764s 00:27:40.076 user 2m36.784s 00:27:40.076 sys 0m10.730s 00:27:40.076 21:34:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:40.076 21:34:14 nvmf_tcp.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:40.076 ************************************ 00:27:40.076 END TEST nvmf_perf_adq 00:27:40.076 ************************************ 00:27:40.076 21:34:14 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:40.076 21:34:14 nvmf_tcp -- nvmf/nvmf.sh@83 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:40.076 21:34:14 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:40.076 21:34:14 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.076 21:34:14 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:40.076 ************************************ 00:27:40.076 START TEST nvmf_shutdown 00:27:40.076 ************************************ 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:27:40.076 * Looking for test storage... 
00:27:40.076 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:40.076 21:34:14 nvmf_tcp.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:40.077 ************************************ 00:27:40.077 START TEST nvmf_shutdown_tc1 00:27:40.077 ************************************ 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:27:40.077 21:34:14 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:40.077 21:34:14 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:42.021 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:42.021 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:42.021 21:34:16 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:42.021 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:42.021 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:42.021 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:42.021 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:42.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.229 ms 00:27:42.022 00:27:42.022 --- 10.0.0.2 ping statistics --- 00:27:42.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.022 rtt min/avg/max/mdev = 0.229/0.229/0.229/0.000 ms 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:42.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:42.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.181 ms 00:27:42.022 00:27:42.022 --- 10.0.0.1 ping statistics --- 00:27:42.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:42.022 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=989045 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 989045 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 989045 ']' 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:42.022 21:34:16 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.279 [2024-07-11 21:34:16.810130] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
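What the trace above records is the nvmfappstart step: the harness launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with reactor mask 0x1E, captures its pid in nvmfpid, and then blocks in waitforlisten until the target's RPC socket answers on /var/tmp/spdk.sock. A minimal sketch of that start-and-wait pattern, assuming an SPDK checkout at $rootdir; the polling loop below is an illustrative stand-in, not the real waitforlisten helper:

# launch the target inside the namespace created by nvmftestinit, mirroring the traced command
ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# rpc_cmd calls are only safe once the RPC listener exists on the UNIX socket
until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done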
00:27:42.279 [2024-07-11 21:34:16.810202] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:42.279 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.279 [2024-07-11 21:34:16.880959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:42.279 [2024-07-11 21:34:16.972520] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:42.279 [2024-07-11 21:34:16.972582] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:42.280 [2024-07-11 21:34:16.972597] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:42.280 [2024-07-11 21:34:16.972611] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:42.280 [2024-07-11 21:34:16.972623] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:42.280 [2024-07-11 21:34:16.972725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:42.280 [2024-07-11 21:34:16.972820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:42.280 [2024-07-11 21:34:16.972874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:42.280 [2024-07-11 21:34:16.972877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.537 [2024-07-11 21:34:17.128676] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:42.537 21:34:17 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:42.537 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:42.537 Malloc1 00:27:42.537 [2024-07-11 21:34:17.213383] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:42.537 Malloc2 00:27:42.537 Malloc3 00:27:42.795 Malloc4 00:27:42.795 Malloc5 00:27:42.795 Malloc6 00:27:42.795 Malloc7 00:27:42.795 Malloc8 00:27:43.053 Malloc9 00:27:43.053 Malloc10 00:27:43.053 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:43.053 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:43.053 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:43.053 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.053 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=989225 00:27:43.053 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 989225 
/var/tmp/bdevperf.sock 00:27:43.053 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 989225 ']' 00:27:43.053 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:43.054 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.054 "name": "Nvme$subsystem", 00:27:43.054 "trtype": "$TEST_TRANSPORT", 00:27:43.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.054 "adrfam": "ipv4", 00:27:43.054 "trsvcid": "$NVMF_PORT", 00:27:43.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.054 "hdgst": ${hdgst:-false}, 00:27:43.054 "ddgst": ${ddgst:-false} 00:27:43.054 }, 00:27:43.054 "method": "bdev_nvme_attach_controller" 00:27:43.054 } 00:27:43.054 EOF 00:27:43.054 )") 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.054 "name": "Nvme$subsystem", 00:27:43.054 "trtype": "$TEST_TRANSPORT", 00:27:43.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.054 "adrfam": "ipv4", 00:27:43.054 "trsvcid": "$NVMF_PORT", 00:27:43.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.054 "hdgst": ${hdgst:-false}, 00:27:43.054 "ddgst": ${ddgst:-false} 00:27:43.054 }, 00:27:43.054 "method": "bdev_nvme_attach_controller" 00:27:43.054 } 00:27:43.054 EOF 00:27:43.054 )") 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.054 
"name": "Nvme$subsystem", 00:27:43.054 "trtype": "$TEST_TRANSPORT", 00:27:43.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.054 "adrfam": "ipv4", 00:27:43.054 "trsvcid": "$NVMF_PORT", 00:27:43.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.054 "hdgst": ${hdgst:-false}, 00:27:43.054 "ddgst": ${ddgst:-false} 00:27:43.054 }, 00:27:43.054 "method": "bdev_nvme_attach_controller" 00:27:43.054 } 00:27:43.054 EOF 00:27:43.054 )") 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.054 "name": "Nvme$subsystem", 00:27:43.054 "trtype": "$TEST_TRANSPORT", 00:27:43.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.054 "adrfam": "ipv4", 00:27:43.054 "trsvcid": "$NVMF_PORT", 00:27:43.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.054 "hdgst": ${hdgst:-false}, 00:27:43.054 "ddgst": ${ddgst:-false} 00:27:43.054 }, 00:27:43.054 "method": "bdev_nvme_attach_controller" 00:27:43.054 } 00:27:43.054 EOF 00:27:43.054 )") 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.054 "name": "Nvme$subsystem", 00:27:43.054 "trtype": "$TEST_TRANSPORT", 00:27:43.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.054 "adrfam": "ipv4", 00:27:43.054 "trsvcid": "$NVMF_PORT", 00:27:43.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.054 "hdgst": ${hdgst:-false}, 00:27:43.054 "ddgst": ${ddgst:-false} 00:27:43.054 }, 00:27:43.054 "method": "bdev_nvme_attach_controller" 00:27:43.054 } 00:27:43.054 EOF 00:27:43.054 )") 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.054 "name": "Nvme$subsystem", 00:27:43.054 "trtype": "$TEST_TRANSPORT", 00:27:43.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.054 "adrfam": "ipv4", 00:27:43.054 "trsvcid": "$NVMF_PORT", 00:27:43.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.054 "hdgst": ${hdgst:-false}, 00:27:43.054 "ddgst": ${ddgst:-false} 00:27:43.054 }, 00:27:43.054 "method": "bdev_nvme_attach_controller" 00:27:43.054 } 00:27:43.054 EOF 00:27:43.054 )") 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.054 "name": "Nvme$subsystem", 
00:27:43.054 "trtype": "$TEST_TRANSPORT", 00:27:43.054 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.054 "adrfam": "ipv4", 00:27:43.054 "trsvcid": "$NVMF_PORT", 00:27:43.054 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.054 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.054 "hdgst": ${hdgst:-false}, 00:27:43.054 "ddgst": ${ddgst:-false} 00:27:43.054 }, 00:27:43.054 "method": "bdev_nvme_attach_controller" 00:27:43.054 } 00:27:43.054 EOF 00:27:43.054 )") 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.054 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.054 { 00:27:43.054 "params": { 00:27:43.055 "name": "Nvme$subsystem", 00:27:43.055 "trtype": "$TEST_TRANSPORT", 00:27:43.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "$NVMF_PORT", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.055 "hdgst": ${hdgst:-false}, 00:27:43.055 "ddgst": ${ddgst:-false} 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 } 00:27:43.055 EOF 00:27:43.055 )") 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.055 { 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme$subsystem", 00:27:43.055 "trtype": "$TEST_TRANSPORT", 00:27:43.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "$NVMF_PORT", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.055 "hdgst": ${hdgst:-false}, 00:27:43.055 "ddgst": ${ddgst:-false} 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 } 00:27:43.055 EOF 00:27:43.055 )") 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:43.055 { 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme$subsystem", 00:27:43.055 "trtype": "$TEST_TRANSPORT", 00:27:43.055 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "$NVMF_PORT", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:43.055 "hdgst": ${hdgst:-false}, 00:27:43.055 "ddgst": ${ddgst:-false} 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 } 00:27:43.055 EOF 00:27:43.055 )") 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
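The block above is gen_nvmf_target_json at work: for each subsystem id it appends one bdev_nvme_attach_controller fragment to a config array through a here-doc, then sets IFS=, so that "${config[*]}" joins the fragments with commas before jq validates the result. A condensed sketch of that accumulate-and-join idiom for two subsystems (the two-element loop is illustrative; the traced helper takes the ids 1 through 10):

config=()
for i in 1 2; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$i",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$i",
    "hostnqn": "nqn.2016-06.io.spdk:host$i",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
  )")
done
# "${config[*]}" joins the elements with the first character of IFS, yielding {...},{...}
(IFS=,; printf '%s\n' "${config[*]}")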
00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:43.055 21:34:17 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme1", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:43.055 "hdgst": false, 00:27:43.055 "ddgst": false 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 },{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme2", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:43.055 "hdgst": false, 00:27:43.055 "ddgst": false 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 },{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme3", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:43.055 "hdgst": false, 00:27:43.055 "ddgst": false 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 },{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme4", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:43.055 "hdgst": false, 00:27:43.055 "ddgst": false 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 },{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme5", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:43.055 "hdgst": false, 00:27:43.055 "ddgst": false 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 },{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme6", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:43.055 "hdgst": false, 00:27:43.055 "ddgst": false 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 },{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme7", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:43.055 "hdgst": false, 00:27:43.055 "ddgst": false 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 },{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme8", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:43.055 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:43.055 "hdgst": false, 
00:27:43.055 "ddgst": false 00:27:43.055 }, 00:27:43.055 "method": "bdev_nvme_attach_controller" 00:27:43.055 },{ 00:27:43.055 "params": { 00:27:43.055 "name": "Nvme9", 00:27:43.055 "trtype": "tcp", 00:27:43.055 "traddr": "10.0.0.2", 00:27:43.055 "adrfam": "ipv4", 00:27:43.055 "trsvcid": "4420", 00:27:43.055 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:43.056 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:43.056 "hdgst": false, 00:27:43.056 "ddgst": false 00:27:43.056 }, 00:27:43.056 "method": "bdev_nvme_attach_controller" 00:27:43.056 },{ 00:27:43.056 "params": { 00:27:43.056 "name": "Nvme10", 00:27:43.056 "trtype": "tcp", 00:27:43.056 "traddr": "10.0.0.2", 00:27:43.056 "adrfam": "ipv4", 00:27:43.056 "trsvcid": "4420", 00:27:43.056 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:43.056 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:43.056 "hdgst": false, 00:27:43.056 "ddgst": false 00:27:43.056 }, 00:27:43.056 "method": "bdev_nvme_attach_controller" 00:27:43.056 }' 00:27:43.056 [2024-07-11 21:34:17.739614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:43.056 [2024-07-11 21:34:17.739706] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:27:43.056 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.056 [2024-07-11 21:34:17.804896] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.313 [2024-07-11 21:34:17.893726] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:45.207 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:45.207 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:27:45.208 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:45.208 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:45.208 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:45.208 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:45.208 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 989225 00:27:45.208 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:27:45.208 21:34:19 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:27:46.139 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 989225 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 989045 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@532 -- # local subsystem config 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.139 { 00:27:46.139 "params": { 00:27:46.139 "name": "Nvme$subsystem", 00:27:46.139 "trtype": "$TEST_TRANSPORT", 00:27:46.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.139 "adrfam": "ipv4", 00:27:46.139 "trsvcid": "$NVMF_PORT", 00:27:46.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.139 "hdgst": ${hdgst:-false}, 00:27:46.139 "ddgst": ${ddgst:-false} 00:27:46.139 }, 00:27:46.139 "method": "bdev_nvme_attach_controller" 00:27:46.139 } 00:27:46.139 EOF 00:27:46.139 )") 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.139 { 00:27:46.139 "params": { 00:27:46.139 "name": "Nvme$subsystem", 00:27:46.139 "trtype": "$TEST_TRANSPORT", 00:27:46.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.139 "adrfam": "ipv4", 00:27:46.139 "trsvcid": "$NVMF_PORT", 00:27:46.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.139 "hdgst": ${hdgst:-false}, 00:27:46.139 "ddgst": ${ddgst:-false} 00:27:46.139 }, 00:27:46.139 "method": "bdev_nvme_attach_controller" 00:27:46.139 } 00:27:46.139 EOF 00:27:46.139 )") 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.139 { 00:27:46.139 "params": { 00:27:46.139 "name": "Nvme$subsystem", 00:27:46.139 "trtype": "$TEST_TRANSPORT", 00:27:46.139 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.139 "adrfam": "ipv4", 00:27:46.139 "trsvcid": "$NVMF_PORT", 00:27:46.139 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.139 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.139 "hdgst": ${hdgst:-false}, 00:27:46.139 "ddgst": ${ddgst:-false} 00:27:46.139 }, 00:27:46.139 "method": "bdev_nvme_attach_controller" 00:27:46.139 } 00:27:46.139 EOF 00:27:46.139 )") 00:27:46.139 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.140 { 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme$subsystem", 00:27:46.140 "trtype": "$TEST_TRANSPORT", 00:27:46.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "$NVMF_PORT", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.140 "hdgst": ${hdgst:-false}, 00:27:46.140 "ddgst": ${ddgst:-false} 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 } 00:27:46.140 EOF 00:27:46.140 )") 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # cat 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.140 { 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme$subsystem", 00:27:46.140 "trtype": "$TEST_TRANSPORT", 00:27:46.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "$NVMF_PORT", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.140 "hdgst": ${hdgst:-false}, 00:27:46.140 "ddgst": ${ddgst:-false} 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 } 00:27:46.140 EOF 00:27:46.140 )") 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.140 { 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme$subsystem", 00:27:46.140 "trtype": "$TEST_TRANSPORT", 00:27:46.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "$NVMF_PORT", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.140 "hdgst": ${hdgst:-false}, 00:27:46.140 "ddgst": ${ddgst:-false} 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 } 00:27:46.140 EOF 00:27:46.140 )") 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.140 { 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme$subsystem", 00:27:46.140 "trtype": "$TEST_TRANSPORT", 00:27:46.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "$NVMF_PORT", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.140 "hdgst": ${hdgst:-false}, 00:27:46.140 "ddgst": ${ddgst:-false} 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 } 00:27:46.140 EOF 00:27:46.140 )") 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.140 { 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme$subsystem", 00:27:46.140 "trtype": "$TEST_TRANSPORT", 00:27:46.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "$NVMF_PORT", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.140 "hdgst": ${hdgst:-false}, 00:27:46.140 "ddgst": ${ddgst:-false} 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 } 00:27:46.140 EOF 00:27:46.140 )") 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 
00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.140 { 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme$subsystem", 00:27:46.140 "trtype": "$TEST_TRANSPORT", 00:27:46.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "$NVMF_PORT", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.140 "hdgst": ${hdgst:-false}, 00:27:46.140 "ddgst": ${ddgst:-false} 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 } 00:27:46.140 EOF 00:27:46.140 )") 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:46.140 { 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme$subsystem", 00:27:46.140 "trtype": "$TEST_TRANSPORT", 00:27:46.140 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "$NVMF_PORT", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:46.140 "hdgst": ${hdgst:-false}, 00:27:46.140 "ddgst": ${ddgst:-false} 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 } 00:27:46.140 EOF 00:27:46.140 )") 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
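For the measurement run the regenerated config is handed to bdevperf over an anonymous pipe rather than written to disk: the --json /dev/fd/62 seen below is the descriptor bash allocated for a process substitution, the same <(gen_nvmf_target_json ...) construct that appears in source form in the "Killed" line earlier. A sketch of the invocation, assuming $rootdir points at the SPDK checkout and gen_nvmf_target_json is the helper traced above; the flags match the traced run:

# -q 64: queue depth; -o 65536: 64 KiB I/Os; -w verify: write-then-read-back workload; -t 1: run one second
"$rootdir/build/examples/bdevperf" \
  --json <(gen_nvmf_target_json "${num_subsystems[@]}") \
  -q 64 -o 65536 -w verify -t 1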
00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:27:46.140 21:34:20 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme1", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "4420", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:46.140 "hdgst": false, 00:27:46.140 "ddgst": false 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 },{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme2", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "4420", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:46.140 "hdgst": false, 00:27:46.140 "ddgst": false 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 },{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme3", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "4420", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:46.140 "hdgst": false, 00:27:46.140 "ddgst": false 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 },{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme4", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "4420", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:46.140 "hdgst": false, 00:27:46.140 "ddgst": false 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 },{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme5", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "4420", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:46.140 "hdgst": false, 00:27:46.140 "ddgst": false 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 },{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme6", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "4420", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:46.140 "hdgst": false, 00:27:46.140 "ddgst": false 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 },{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme7", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "4420", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:46.140 "hdgst": false, 00:27:46.140 "ddgst": false 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 },{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme8", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.140 "adrfam": "ipv4", 00:27:46.140 "trsvcid": "4420", 00:27:46.140 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:46.140 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:46.140 "hdgst": false, 
00:27:46.140 "ddgst": false 00:27:46.140 }, 00:27:46.140 "method": "bdev_nvme_attach_controller" 00:27:46.140 },{ 00:27:46.140 "params": { 00:27:46.140 "name": "Nvme9", 00:27:46.140 "trtype": "tcp", 00:27:46.140 "traddr": "10.0.0.2", 00:27:46.141 "adrfam": "ipv4", 00:27:46.141 "trsvcid": "4420", 00:27:46.141 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:46.141 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:46.141 "hdgst": false, 00:27:46.141 "ddgst": false 00:27:46.141 }, 00:27:46.141 "method": "bdev_nvme_attach_controller" 00:27:46.141 },{ 00:27:46.141 "params": { 00:27:46.141 "name": "Nvme10", 00:27:46.141 "trtype": "tcp", 00:27:46.141 "traddr": "10.0.0.2", 00:27:46.141 "adrfam": "ipv4", 00:27:46.141 "trsvcid": "4420", 00:27:46.141 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:46.141 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:46.141 "hdgst": false, 00:27:46.141 "ddgst": false 00:27:46.141 }, 00:27:46.141 "method": "bdev_nvme_attach_controller" 00:27:46.141 }' 00:27:46.141 [2024-07-11 21:34:20.786936] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:46.141 [2024-07-11 21:34:20.787023] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid989642 ] 00:27:46.141 EAL: No free 2048 kB hugepages reported on node 1 00:27:46.141 [2024-07-11 21:34:20.854008] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:46.398 [2024-07-11 21:34:20.944024] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:27:47.768 Running I/O for 1 seconds...
00:27:49.140
00:27:49.140 Latency(us)
00:27:49.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:27:49.140 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme1n1 : 1.06 181.44 11.34 0.00 0.00 349227.24 23107.51 284280.60
00:27:49.140 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme2n1 : 1.13 230.78 14.42 0.00 0.00 268040.75 6456.51 264085.81
00:27:49.140 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme3n1 : 1.15 223.23 13.95 0.00 0.00 274678.14 20777.34 268746.15
00:27:49.140 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme4n1 : 1.09 235.00 14.69 0.00 0.00 254120.01 17961.72 260978.92
00:27:49.140 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme5n1 : 1.15 226.53 14.16 0.00 0.00 260816.29 5776.88 265639.25
00:27:49.140 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme6n1 : 1.15 221.67 13.85 0.00 0.00 263134.81 21456.97 268746.15
00:27:49.140 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme7n1 : 1.13 225.84 14.11 0.00 0.00 253376.47 21554.06 270299.59
00:27:49.140 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme8n1 : 1.19 269.69 16.86 0.00 0.00 209550.75 13786.83 260978.92
00:27:49.140 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme9n1 : 1.18 216.90 13.56 0.00 0.00 256193.42 21359.88 285834.05
00:27:49.140 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:27:49.140 Verification LBA range: start 0x0 length 0x400
00:27:49.140 Nvme10n1 : 1.20 270.49 16.91 0.00 0.00 202208.19 5995.33 290494.39
00:27:49.140 ===================================================================================================================
00:27:49.140 Total : 2301.56 143.85 0.00 0.00 254296.14 5776.88 290494.39
00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:49.140 rmmod nvme_tcp 00:27:49.140 rmmod nvme_fabrics 00:27:49.140 rmmod nvme_keyring 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 989045 ']' 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 989045 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 989045 ']' 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 989045 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 989045 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:49.140 
21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 989045' 00:27:49.140 killing process with pid 989045 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 989045 00:27:49.140 21:34:23 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 989045 00:27:49.706 21:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:49.706 21:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:49.706 21:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:49.706 21:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:49.706 21:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:49.706 21:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:49.706 21:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:49.706 21:34:24 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:52.235 00:27:52.235 real 0m11.814s 00:27:52.235 user 0m34.116s 00:27:52.235 sys 0m3.255s 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:52.235 ************************************ 00:27:52.235 END TEST nvmf_shutdown_tc1 00:27:52.235 ************************************ 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:27:52.235 ************************************ 00:27:52.235 START TEST nvmf_shutdown_tc2 00:27:52.235 ************************************ 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:52.235 21:34:26 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:27:52.235 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:27:52.235 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:27:52.235 Found net devices under 0000:0a:00.0: cvl_0_0 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:52.235 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:27:52.236 Found net devices under 0000:0a:00.1: cvl_0_1 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 
addr flush cvl_0_1 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:52.236 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:52.236 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.206 ms 00:27:52.236 00:27:52.236 --- 10.0.0.2 ping statistics --- 00:27:52.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.236 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:52.236 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:52.236 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:27:52.236 00:27:52.236 --- 10.0.0.1 ping statistics --- 00:27:52.236 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:52.236 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@481 -- # nvmfpid=990406 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 990406 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 990406 ']' 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:52.236 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.236 [2024-07-11 21:34:26.688238] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:52.236 [2024-07-11 21:34:26.688322] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:52.236 EAL: No free 2048 kB hugepages reported on node 1 00:27:52.236 [2024-07-11 21:34:26.757626] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:52.236 [2024-07-11 21:34:26.847273] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:52.236 [2024-07-11 21:34:26.847336] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:52.236 [2024-07-11 21:34:26.847350] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:52.236 [2024-07-11 21:34:26.847376] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:52.236 [2024-07-11 21:34:26.847385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
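(For readability: the target launch traced above reduces to the shell sketch below. The namespace name, binary path, and flags are taken from the log; the rpc.py polling loop stands in for the harness's waitforlisten helper and is an assumption about how readiness is checked, not the helper's exact body.)

# Start nvmf_tgt inside the target namespace and remember its PID.
ip netns exec cvl_0_0_ns_spdk \
  ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!
# Poll the JSON-RPC socket until the target answers, bailing out if it dies.
until ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
  kill -0 "$nvmfpid" || { echo "nvmf_tgt died during startup" >&2; exit 1; }
  sleep 0.5
done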
00:27:52.236 [2024-07-11 21:34:26.847469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:52.236 [2024-07-11 21:34:26.847532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:52.236 [2024-07-11 21:34:26.847553] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:52.236 [2024-07-11 21:34:26.847557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.236 21:34:26 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.236 [2024-07-11 21:34:27.000703] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:52.496 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:52.496 Malloc1 00:27:52.496 [2024-07-11 21:34:27.090424] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:52.496 Malloc2 00:27:52.496 Malloc3 00:27:52.496 Malloc4 00:27:52.496 Malloc5 00:27:52.782 Malloc6 00:27:52.782 Malloc7 00:27:52.782 Malloc8 00:27:52.782 Malloc9 00:27:52.782 Malloc10 00:27:52.782 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:52.782 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:27:52.782 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:52.782 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=990586 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 990586 /var/tmp/bdevperf.sock 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 990586 ']' 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:27:53.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.040 { 00:27:53.040 "params": { 00:27:53.040 "name": "Nvme$subsystem", 00:27:53.040 "trtype": "$TEST_TRANSPORT", 00:27:53.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.040 "adrfam": "ipv4", 00:27:53.040 "trsvcid": "$NVMF_PORT", 00:27:53.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.040 "hdgst": ${hdgst:-false}, 00:27:53.040 "ddgst": ${ddgst:-false} 00:27:53.040 }, 00:27:53.040 "method": "bdev_nvme_attach_controller" 00:27:53.040 } 00:27:53.040 EOF 00:27:53.040 )") 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.040 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.040 { 00:27:53.040 "params": { 00:27:53.040 "name": "Nvme$subsystem", 00:27:53.040 "trtype": "$TEST_TRANSPORT", 00:27:53.040 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.040 "adrfam": "ipv4", 00:27:53.040 "trsvcid": "$NVMF_PORT", 00:27:53.040 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.040 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.040 "hdgst": ${hdgst:-false}, 00:27:53.040 "ddgst": ${ddgst:-false} 00:27:53.040 }, 00:27:53.040 "method": "bdev_nvme_attach_controller" 00:27:53.040 } 00:27:53.040 EOF 00:27:53.040 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.041 { 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme$subsystem", 00:27:53.041 "trtype": "$TEST_TRANSPORT", 00:27:53.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "$NVMF_PORT", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.041 "hdgst": ${hdgst:-false}, 00:27:53.041 "ddgst": ${ddgst:-false} 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 } 00:27:53.041 EOF 00:27:53.041 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.041 { 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme$subsystem", 00:27:53.041 "trtype": "$TEST_TRANSPORT", 00:27:53.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "$NVMF_PORT", 
00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.041 "hdgst": ${hdgst:-false}, 00:27:53.041 "ddgst": ${ddgst:-false} 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 } 00:27:53.041 EOF 00:27:53.041 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.041 { 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme$subsystem", 00:27:53.041 "trtype": "$TEST_TRANSPORT", 00:27:53.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "$NVMF_PORT", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.041 "hdgst": ${hdgst:-false}, 00:27:53.041 "ddgst": ${ddgst:-false} 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 } 00:27:53.041 EOF 00:27:53.041 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.041 { 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme$subsystem", 00:27:53.041 "trtype": "$TEST_TRANSPORT", 00:27:53.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "$NVMF_PORT", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.041 "hdgst": ${hdgst:-false}, 00:27:53.041 "ddgst": ${ddgst:-false} 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 } 00:27:53.041 EOF 00:27:53.041 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.041 { 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme$subsystem", 00:27:53.041 "trtype": "$TEST_TRANSPORT", 00:27:53.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "$NVMF_PORT", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.041 "hdgst": ${hdgst:-false}, 00:27:53.041 "ddgst": ${ddgst:-false} 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 } 00:27:53.041 EOF 00:27:53.041 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.041 { 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme$subsystem", 00:27:53.041 "trtype": "$TEST_TRANSPORT", 00:27:53.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "$NVMF_PORT", 00:27:53.041 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.041 "hdgst": ${hdgst:-false}, 00:27:53.041 "ddgst": ${ddgst:-false} 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 } 00:27:53.041 EOF 00:27:53.041 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.041 { 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme$subsystem", 00:27:53.041 "trtype": "$TEST_TRANSPORT", 00:27:53.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "$NVMF_PORT", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.041 "hdgst": ${hdgst:-false}, 00:27:53.041 "ddgst": ${ddgst:-false} 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 } 00:27:53.041 EOF 00:27:53.041 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:27:53.041 { 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme$subsystem", 00:27:53.041 "trtype": "$TEST_TRANSPORT", 00:27:53.041 "traddr": "$NVMF_FIRST_TARGET_IP", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "$NVMF_PORT", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:27:53.041 "hdgst": ${hdgst:-false}, 00:27:53.041 "ddgst": ${ddgst:-false} 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 } 00:27:53.041 EOF 00:27:53.041 )") 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 
00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:27:53.041 21:34:27 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme1", 00:27:53.041 "trtype": "tcp", 00:27:53.041 "traddr": "10.0.0.2", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "4420", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:27:53.041 "hdgst": false, 00:27:53.041 "ddgst": false 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 },{ 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme2", 00:27:53.041 "trtype": "tcp", 00:27:53.041 "traddr": "10.0.0.2", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "4420", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:27:53.041 "hdgst": false, 00:27:53.041 "ddgst": false 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 },{ 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme3", 00:27:53.041 "trtype": "tcp", 00:27:53.041 "traddr": "10.0.0.2", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "4420", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:27:53.041 "hdgst": false, 00:27:53.041 "ddgst": false 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 },{ 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme4", 00:27:53.041 "trtype": "tcp", 00:27:53.041 "traddr": "10.0.0.2", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "4420", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:27:53.041 "hdgst": false, 00:27:53.041 "ddgst": false 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 },{ 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme5", 00:27:53.041 "trtype": "tcp", 00:27:53.041 "traddr": "10.0.0.2", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "4420", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:27:53.041 "hdgst": false, 00:27:53.041 "ddgst": false 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.041 },{ 00:27:53.041 "params": { 00:27:53.041 "name": "Nvme6", 00:27:53.041 "trtype": "tcp", 00:27:53.041 "traddr": "10.0.0.2", 00:27:53.041 "adrfam": "ipv4", 00:27:53.041 "trsvcid": "4420", 00:27:53.041 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:27:53.041 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:27:53.041 "hdgst": false, 00:27:53.041 "ddgst": false 00:27:53.041 }, 00:27:53.041 "method": "bdev_nvme_attach_controller" 00:27:53.042 },{ 00:27:53.042 "params": { 00:27:53.042 "name": "Nvme7", 00:27:53.042 "trtype": "tcp", 00:27:53.042 "traddr": "10.0.0.2", 00:27:53.042 "adrfam": "ipv4", 00:27:53.042 "trsvcid": "4420", 00:27:53.042 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:27:53.042 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:27:53.042 "hdgst": false, 00:27:53.042 "ddgst": false 00:27:53.042 }, 00:27:53.042 "method": "bdev_nvme_attach_controller" 00:27:53.042 },{ 00:27:53.042 "params": { 00:27:53.042 "name": "Nvme8", 00:27:53.042 "trtype": "tcp", 00:27:53.042 "traddr": "10.0.0.2", 00:27:53.042 "adrfam": "ipv4", 00:27:53.042 "trsvcid": "4420", 00:27:53.042 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:27:53.042 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:27:53.042 "hdgst": false, 
00:27:53.042 "ddgst": false 00:27:53.042 }, 00:27:53.042 "method": "bdev_nvme_attach_controller" 00:27:53.042 },{ 00:27:53.042 "params": { 00:27:53.042 "name": "Nvme9", 00:27:53.042 "trtype": "tcp", 00:27:53.042 "traddr": "10.0.0.2", 00:27:53.042 "adrfam": "ipv4", 00:27:53.042 "trsvcid": "4420", 00:27:53.042 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:27:53.042 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:27:53.042 "hdgst": false, 00:27:53.042 "ddgst": false 00:27:53.042 }, 00:27:53.042 "method": "bdev_nvme_attach_controller" 00:27:53.042 },{ 00:27:53.042 "params": { 00:27:53.042 "name": "Nvme10", 00:27:53.042 "trtype": "tcp", 00:27:53.042 "traddr": "10.0.0.2", 00:27:53.042 "adrfam": "ipv4", 00:27:53.042 "trsvcid": "4420", 00:27:53.042 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:27:53.042 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:27:53.042 "hdgst": false, 00:27:53.042 "ddgst": false 00:27:53.042 }, 00:27:53.042 "method": "bdev_nvme_attach_controller" 00:27:53.042 }' 00:27:53.042 [2024-07-11 21:34:27.592056] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:27:53.042 [2024-07-11 21:34:27.592163] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid990586 ] 00:27:53.042 EAL: No free 2048 kB hugepages reported on node 1 00:27:53.042 [2024-07-11 21:34:27.656859] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:53.042 [2024-07-11 21:34:27.744377] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:54.939 Running I/O for 10 seconds... 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 
-- common/autotest_common.sh@559 -- # xtrace_disable 00:27:54.939 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.197 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.197 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:27:55.197 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:27:55.197 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:55.455 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:55.455 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:55.455 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.455 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.455 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.455 21:34:29 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.455 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.455 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:27:55.455 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:27:55.455 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:27:55.713 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 990586 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 990586 ']' 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 990586 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 
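(The waitforio loop traced above, in one self-contained function: poll bdevperf's RPC socket until the named bdev has completed at least 100 reads, giving up after 10 tries with a 0.25s backoff. rpc_cmd in the harness wraps scripts/rpc.py; the wrapper path here is an assumption about repo layout.)

waitforio() {
  local sock=$1 bdev=$2 i count ret=1
  for ((i = 10; i != 0; i--)); do
    count=$(./scripts/rpc.py -s "$sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
    if [ "$count" -ge 100 ]; then
      ret=0    # enough I/O observed; the workload is demonstrably running
      break
    fi
    sleep 0.25
  done
  return $ret
}
waitforio /var/tmp/bdevperf.sock Nvme1n1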
00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 990586 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 990586' 00:27:55.714 killing process with pid 990586 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 990586 00:27:55.714 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 990586 00:27:55.714 Received shutdown signal, test time was about 0.952910 seconds 00:27:55.714 00:27:55.714 Latency(us) 00:27:55.714 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:55.714 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme1n1 : 0.91 210.27 13.14 0.00 0.00 300183.70 28738.75 260978.92 00:27:55.714 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme2n1 : 0.91 211.70 13.23 0.00 0.00 292817.22 18350.08 270299.59 00:27:55.714 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme3n1 : 0.95 268.89 16.81 0.00 0.00 225474.18 24369.68 268746.15 00:27:55.714 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme4n1 : 0.94 276.82 17.30 0.00 0.00 214710.29 5218.61 248551.35 00:27:55.714 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme5n1 : 0.95 269.71 16.86 0.00 0.00 215176.44 9126.49 264085.81 00:27:55.714 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme6n1 : 0.94 205.07 12.82 0.00 0.00 279082.22 26408.58 316902.97 00:27:55.714 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme7n1 : 0.93 207.04 12.94 0.00 0.00 270196.94 40777.96 248551.35 00:27:55.714 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme8n1 : 0.92 208.65 13.04 0.00 0.00 262016.32 18641.35 251658.24 00:27:55.714 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme9n1 : 0.93 206.47 12.90 0.00 0.00 259592.85 21651.15 267192.70 00:27:55.714 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:27:55.714 Verification LBA range: start 0x0 length 0x400 00:27:55.714 Nvme10n1 : 0.94 207.54 12.97 0.00 0.00 251996.45 3131.16 285834.05 00:27:55.714 =================================================================================================================== 00:27:55.714 Total : 2272.14 142.01 0.00 
0.00 253514.37 3131.16 316902.97 00:27:55.971 21:34:30 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 990406 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:57.342 rmmod nvme_tcp 00:27:57.342 rmmod nvme_fabrics 00:27:57.342 rmmod nvme_keyring 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 990406 ']' 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 990406 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 990406 ']' 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 990406 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 990406 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 990406' 00:27:57.342 killing process with pid 990406 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 990406 00:27:57.342 21:34:31 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 990406 00:27:57.600 21:34:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
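(killprocess, as traced in the teardown above, in outline: confirm the PID is still alive, check the process name so a bare sudo is never signalled, then kill and reap it. This is a sketch of the checks visible in the trace, not the helper's full body.)

killprocess() {
  local pid=$1 name
  kill -0 "$pid" || return 1                 # is the process still there?
  name=$(ps --no-headers -o comm= "$pid")    # e.g. reactor_1
  [ "$name" = sudo ] && return 1             # never kill a bare sudo
  echo "killing process with pid $pid"
  kill "$pid"
  wait "$pid"
}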
00:27:57.600 21:34:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:57.600 21:34:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:57.600 21:34:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:57.600 21:34:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:57.600 21:34:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:57.600 21:34:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:27:57.600 21:34:32 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:00.129 00:28:00.129 real 0m7.860s 00:28:00.129 user 0m23.896s 00:28:00.129 sys 0m1.590s 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:28:00.129 ************************************ 00:28:00.129 END TEST nvmf_shutdown_tc2 00:28:00.129 ************************************ 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:00.129 ************************************ 00:28:00.129 START TEST nvmf_shutdown_tc3 00:28:00.129 ************************************ 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 
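(gather_supported_nvmf_pci_devs, about to be traced below for tc3, reduced to its core idea: walk sysfs, keep NICs whose vendor:device pair is on the supported list, and record the netdev names beneath them. The two E810 IDs are the ones seen in this log; the real helper builds its list from a cached PCI bus scan, so this direct sysfs walk is a simplification.)

pci_devs=() net_devs=()
for dev in /sys/bus/pci/devices/*; do
  vendor=$(cat "$dev/vendor") device=$(cat "$dev/device")
  case "$vendor $device" in
    "0x8086 0x1592"|"0x8086 0x159b")         # Intel E810 variants
      pci_devs+=("${dev##*/}")
      for net in "$dev"/net/*; do
        [ -e "$net" ] && net_devs+=("${net##*/}")   # e.g. cvl_0_0
      done
      ;;
  esac
done
printf 'Found %s\n' "${pci_devs[@]}"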
00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:00.129 21:34:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:00.129 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.129 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:00.130 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:00.130 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:00.130 21:34:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:00.130 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:00.130 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:00.130 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.251 ms 00:28:00.130 00:28:00.130 --- 10.0.0.2 ping statistics --- 00:28:00.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.130 rtt min/avg/max/mdev = 0.251/0.251/0.251/0.000 ms 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:00.130 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:00.130 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.160 ms 00:28:00.130 00:28:00.130 --- 10.0.0.1 ping statistics --- 00:28:00.130 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:00.130 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=991494 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 991494 00:28:00.130 21:34:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 991494 ']' 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.130 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:00.130 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.131 [2024-07-11 21:34:34.586939] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:00.131 [2024-07-11 21:34:34.587016] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:00.131 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.131 [2024-07-11 21:34:34.656811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:00.131 [2024-07-11 21:34:34.748032] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:00.131 [2024-07-11 21:34:34.748101] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:00.131 [2024-07-11 21:34:34.748128] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:00.131 [2024-07-11 21:34:34.748142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:00.131 [2024-07-11 21:34:34.748154] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
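The launch traced above starts nvmf_tgt inside the test namespace with core mask 0x1E (cores 1 through 4, matching the four reactor notices below) and then blocks in waitforlisten until the RPC socket answers. A hedged stand-in for that launch-and-wait step; the real waitforlisten helper has more retries and diagnostics:

# start the target inside the namespace; -m 0x1E pins reactors to cores 1-4
ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &
nvmfpid=$!

# poll until the UNIX-domain RPC socket accepts a harmless method call
for _ in $(seq 1 100); do
    ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null && break
    kill -0 "$nvmfpid" || exit 1   # bail out if the target died before listening
    sleep 0.1
done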
00:28:00.131 [2024-07-11 21:34:34.748252] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:00.131 [2024-07-11 21:34:34.748349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:00.131 [2024-07-11 21:34:34.748416] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:28:00.131 [2024-07-11 21:34:34.748418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:00.131 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.389 [2024-07-11 21:34:34.902693] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 
nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:00.389 21:34:34 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.389 Malloc1 00:28:00.389 [2024-07-11 21:34:34.990600] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:00.389 Malloc2 00:28:00.389 Malloc3 00:28:00.389 Malloc4 00:28:00.389 Malloc5 00:28:00.647 Malloc6 00:28:00.647 Malloc7 00:28:00.647 Malloc8 00:28:00.647 Malloc9 00:28:00.647 Malloc10 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=991661 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 991661 /var/tmp/bdevperf.sock 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 991661 ']' 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
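Each cat in the num_subsystems loop above appends one subsystem's worth of RPCs to rpcs.txt, and the single rpc_cmd that follows replays the whole batch; that is what produces Malloc1 through Malloc10 and the 4420 listener. A plausible reconstruction of that loop (bdev size, block size, and serial prefix are illustrative defaults, not read from this log):

for i in "${num_subsystems[@]}"; do
    cat <<EOF >> "$testdir/rpcs.txt"
bdev_malloc_create ${MALLOC_BDEV_SIZE:-64} ${MALLOC_BLOCK_SIZE:-512} -b Malloc$i
nvmf_subsystem_create nqn.2016-06.io.spdk:cnode$i -a -s SPDK$i
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$i
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a $NVMF_FIRST_TARGET_IP -s $NVMF_PORT
EOF
done
# replay the accumulated batch through one rpc.py session
rpc_cmd < "$testdir/rpcs.txt"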
00:28:00.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 00:28:00.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.907 EOF 00:28:00.907 )") 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 00:28:00.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.907 EOF 00:28:00.907 )") 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 00:28:00.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.907 EOF 00:28:00.907 )") 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 
00:28:00.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.907 EOF 00:28:00.907 )") 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 00:28:00.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.907 EOF 00:28:00.907 )") 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 00:28:00.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.907 EOF 00:28:00.907 )") 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 00:28:00.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.907 EOF 00:28:00.907 )") 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 00:28:00.907 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.907 EOF 00:28:00.907 )") 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.907 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.907 { 00:28:00.907 "params": { 00:28:00.907 "name": "Nvme$subsystem", 00:28:00.907 "trtype": "$TEST_TRANSPORT", 00:28:00.907 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.907 "adrfam": "ipv4", 00:28:00.907 "trsvcid": "$NVMF_PORT", 00:28:00.907 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.907 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.907 "hdgst": ${hdgst:-false}, 00:28:00.907 "ddgst": ${ddgst:-false} 00:28:00.907 }, 00:28:00.907 "method": "bdev_nvme_attach_controller" 00:28:00.907 } 00:28:00.908 EOF 00:28:00.908 )") 00:28:00.908 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.908 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:00.908 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:00.908 { 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme$subsystem", 00:28:00.908 "trtype": "$TEST_TRANSPORT", 00:28:00.908 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "$NVMF_PORT", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:00.908 "hdgst": ${hdgst:-false}, 00:28:00.908 "ddgst": ${ddgst:-false} 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 } 00:28:00.908 EOF 00:28:00.908 )") 00:28:00.908 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:28:00.908 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 
00:28:00.908 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:28:00.908 21:34:35 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme1", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme2", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme3", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme4", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme5", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme6", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme7", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme8", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:00.908 "hdgst": false, 
00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme9", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 },{ 00:28:00.908 "params": { 00:28:00.908 "name": "Nvme10", 00:28:00.908 "trtype": "tcp", 00:28:00.908 "traddr": "10.0.0.2", 00:28:00.908 "adrfam": "ipv4", 00:28:00.908 "trsvcid": "4420", 00:28:00.908 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:00.908 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:00.908 "hdgst": false, 00:28:00.908 "ddgst": false 00:28:00.908 }, 00:28:00.908 "method": "bdev_nvme_attach_controller" 00:28:00.908 }' 00:28:00.908 [2024-07-11 21:34:35.483837] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:00.908 [2024-07-11 21:34:35.483916] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid991661 ] 00:28:00.908 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.908 [2024-07-11 21:34:35.548772] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:00.908 [2024-07-11 21:34:35.635586] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:02.281 Running I/O for 10 seconds... 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme1n1 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:28:02.847 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 991494 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 991494 ']' 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 991494 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 991494 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 991494' 00:28:03.115 killing process with pid 991494 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 991494 00:28:03.115 21:34:37 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 991494 00:28:03.115 
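The two iostat polls above (read_io_count=67, then 131 against the -ge 100 threshold) are the waitforio helper confirming that bdevperf has real traffic in flight before the target is killed underneath it. Condensed from the traced logic:

waitforio() {
    local rpc_sock=$1 bdev=$2 ret=1 i read_io_count
    for ((i = 10; i != 0; i--)); do
        read_io_count=$(rpc_cmd -s "$rpc_sock" bdev_get_iostat -b "$bdev" \
            | jq -r '.bdevs[0].num_read_ops')
        if [ "$read_io_count" -ge 100 ]; then
            ret=0   # enough reads completed; safe to start the shutdown
            break
        fi
        sleep 0.25
    done
    return $ret
}

Once waitforio returns 0, killprocess 991494 tears the target down mid-I/O, which is what the recv-state errors below reflect: qpairs being dismantled while the initiator is still driving traffic.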
[2024-07-11 21:34:37.811450] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023790 is same with the state(5) to be set
(message repeated for tqpair=0x2023790 through 21:34:37.812431)
[2024-07-11 21:34:37.813872] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2026190 is same with the state(5) to be set
(message repeated for tqpair=0x2026190 through 21:34:37.814726)
[2024-07-11 21:34:37.816171] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set
[2024-07-11 21:34:37.816222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same
with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816234] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816272] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816285] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816297] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816325] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816337] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816349] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816361] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816373] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.116 [2024-07-11 21:34:37.816385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816411] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816424] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816436] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816448] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816460] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816489] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816501] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816513] 
tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816525] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816537] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816549] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816560] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816572] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816584] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816596] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816608] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816620] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816632] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816645] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816657] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816669] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816681] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816693] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816705] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816729] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816740] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816759] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the state(5) to be set 00:28:03.117 [2024-07-11 21:34:37.816788] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2023c30 is same with the 
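The wall of identical lines above comes from a guard in SPDK's tcp.c: the recv-state setter refuses a transition to the state the qpair is already in and logs instead, so a qpair repeatedly asked for the same state (plausibly one wedged in its error/teardown state while events keep arriving) emits one line per request. A minimal compilable sketch of that pattern, with a trimmed stand-in struct and assumed enum names and numbering rather than the real SPDK definitions:

    #include <stdio.h>

    /* Stand-in for SPDK's nvme_tcp_pdu_recv_state; the names and ordering
     * here are assumptions, present only to make the sketch self-contained. */
    enum pdu_recv_state {
        RECV_STATE_AWAIT_PDU_READY,
        RECV_STATE_AWAIT_PDU_CH,
        RECV_STATE_AWAIT_PDU_PSH,
        RECV_STATE_AWAIT_PDU_PAYLOAD,
        RECV_STATE_QUIESCING,
        RECV_STATE_ERROR,              /* would print as state(5) */
    };

    struct tcp_qpair {
        enum pdu_recv_state recv_state;
    };

    static void set_recv_state(struct tcp_qpair *tqpair, enum pdu_recv_state state)
    {
        if (tqpair->recv_state == state) {
            /* No-op transition: log and bail out instead of applying it. */
            fprintf(stderr, "The recv state of tqpair=%p is same with the state(%d) to be set\n",
                    (void *)tqpair, (int)state);
            return;
        }
        tqpair->recv_state = state;
    }

    int main(void)
    {
        struct tcp_qpair q = { .recv_state = RECV_STATE_ERROR };

        /* Two redundant requests produce two identical error lines,
         * exactly the repetition pattern seen in this log. */
        set_recv_state(&q, RECV_STATE_ERROR);
        set_recv_state(&q, RECV_STATE_ERROR);
        return 0;
    }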
00:28:03.117 [2024-07-11 21:34:37.817043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.117 [2024-07-11 21:34:37.817096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.117 (same command/completion pair repeated for qid:0 cid:1 through cid:3)
00:28:03.117 [2024-07-11 21:34:37.817207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc8b0 is same with the state(5) to be set
00:28:03.117 [2024-07-11 21:34:37.817337] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.117 [2024-07-11 21:34:37.817361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.117 (same command/completion pair repeated for qid:0 cid:1 through cid:3)
00:28:03.117 [2024-07-11 21:34:37.817464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59290 is same with the state(5) to be set
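The NOTICE pairs above show each controller's four outstanding ASYNC EVENT REQUEST admin commands (cid 0 through 3) being completed as the queues are torn down. The "(00/08)" printed with the status is NVMe's (status code type / status code) pair: SCT 0x0 is the generic command set and SC 0x08 is "Command Aborted due to SQ Deletion". A small sketch decoding just that pair; the macro names are mine, the numeric values are from the NVMe base spec:

    #include <stdio.h>

    /* Generic-command-set status values per the NVMe base spec. */
    #define SCT_GENERIC            0x0
    #define SC_ABORTED_SQ_DELETION 0x08

    static const char *status_string(unsigned sct, unsigned sc)
    {
        if (sct == SCT_GENERIC && sc == SC_ABORTED_SQ_DELETION)
            return "ABORTED - SQ DELETION";
        return "UNKNOWN";
    }

    int main(void)
    {
        unsigned sct = 0x00, sc = 0x08;   /* printed as "(00/08)" in the log */
        printf("%s (%02x/%02x)\n", status_string(sct, sc), sct, sc);
        return 0;
    }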
00:28:03.117 [2024-07-11 21:34:37.819851] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20240d0 is same with the state(5) to be set
00:28:03.118 (same message repeated for tqpair=0x20240d0 through [2024-07-11 21:34:37.820682])
00:28:03.118 [2024-07-11 21:34:37.823078] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024590 is same with the state(5) to be set
00:28:03.118 [2024-07-11 21:34:37.823717] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:03.118 [2024-07-11 21:34:37.823816] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:03.118 (same message repeated for tqpair=0x2024590 through [2024-07-11 21:34:37.824214])
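The two "Unexpected PDU type 0x00" errors come from the host side's PDU common-header handler. In the NVMe/TCP transport, PDU type 0x00 is ICReq, a host-to-controller PDU an initiator should never receive; here it plausibly reflects a zeroed or torn header read while the connection was being killed. A sketch of the kind of type check involved; the accept list reflects which PDUs flow controller-to-host, and this is an illustration, not the SPDK code at nvme_tcp.c:1241:

    #include <stdio.h>
    #include <stdint.h>

    /* NVMe/TCP PDU types, values per the NVMe/TCP transport spec. */
    enum nvme_tcp_pdu_type {
        PDU_TYPE_IC_REQ       = 0x00,
        PDU_TYPE_IC_RESP      = 0x01,
        PDU_TYPE_H2C_TERM_REQ = 0x02,
        PDU_TYPE_C2H_TERM_REQ = 0x03,
        PDU_TYPE_CAPSULE_CMD  = 0x04,
        PDU_TYPE_CAPSULE_RESP = 0x05,
        PDU_TYPE_H2C_DATA     = 0x06,
        PDU_TYPE_C2H_DATA     = 0x07,
        PDU_TYPE_R2T          = 0x09,
    };

    /* A host only ever expects controller-to-host PDU types. */
    static int host_pdu_type_ok(uint8_t pdu_type)
    {
        switch (pdu_type) {
        case PDU_TYPE_IC_RESP:
        case PDU_TYPE_C2H_TERM_REQ:
        case PDU_TYPE_CAPSULE_RESP:
        case PDU_TYPE_C2H_DATA:
        case PDU_TYPE_R2T:
            return 1;
        default:
            return 0;
        }
    }

    int main(void)
    {
        uint8_t type = 0x00;   /* the type reported in the log */
        if (!host_pdu_type_ok(type))
            fprintf(stderr, "Unexpected PDU type 0x%02x\n", (unsigned)type);
        return 0;
    }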
00:28:03.118 [2024-07-11 21:34:37.824668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.118 [2024-07-11 21:34:37.824696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.119 (same READ/completion pair repeated for sqid:1 cid:1 through cid:24, lba 24704 through 27648, len:128)
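The aborted READs are not random: the LBA advances by exactly the transfer length as the cid increments, i.e. one sequential stream of in-flight 128-block reads whose entire queue depth was cancelled by the submission-queue deletion. The arithmetic, checkable against the log entries above and below:

    #include <stdio.h>

    int main(void)
    {
        /* Each aborted READ covers 128 blocks and starts where the
         * previous one ended: lba = 24576 + cid * 128. */
        unsigned base_lba = 24576, len = 128;

        for (unsigned cid = 0; cid <= 48; cid++)
            printf("READ sqid:1 cid:%u lba:%u len:%u\n", cid, base_lba + cid * len, len);

        /* cid:24 -> lba:27648 and cid:48 -> lba:30720, matching the last
         * pairs printed before the log is cut off. */
        return 0;
    }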
00:28:03.119 (same READ/completion pair repeated for sqid:1 cid:25 through cid:48, lba 27776 through 30720, len:128, interleaved mid-line with the recv-state errors below)
00:28:03.119 [2024-07-11 21:34:37.825502] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024a30 is same with the state(5) to be set
00:28:03.120 (same message repeated for tqpair=0x2024a30 through [2024-07-11 21:34:37.826301])
00:28:03.120 [2024-07-11 21:34:37.826313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The
recv state of tqpair=0x2024a30 is same with the state(5) to be set 00:28:03.120 [2024-07-11 21:34:37.826326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024a30 is same with the state(5) to be set 00:28:03.120 [2024-07-11 21:34:37.826327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.120 [2024-07-11 21:34:37.826338] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024a30 is same with the state(5) to be set 00:28:03.120 [2024-07-11 21:34:37.826347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.120 [2024-07-11 21:34:37.826351] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024a30 is same with the state(5) to be set 00:28:03.120 [2024-07-11 21:34:37.826364] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024a30 is same with [2024-07-11 21:34:37.826364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:12the state(5) to be set 00:28:03.120 8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.120 [2024-07-11 21:34:37.826378] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024a30 is same with the state(5) to be set 00:28:03.120 [2024-07-11 21:34:37.826380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.120 [2024-07-11 21:34:37.826391] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024a30 is same with the state(5) to be set 00:28:03.120 [2024-07-11 21:34:37.826398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.120 [2024-07-11 21:34:37.826412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.120 [2024-07-11 21:34:37.826429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.120 [2024-07-11 21:34:37.826443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.120 [2024-07-11 21:34:37.826459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.120 [2024-07-11 21:34:37.826473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.120 [2024-07-11 21:34:37.826490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.120 [2024-07-11 21:34:37.826504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.120 [2024-07-11 21:34:37.826521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.120 [2024-07-11 21:34:37.826535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.120 [2024-07-11 
00:28:03.120 [2024-07-11 21:34:37.826551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.120 [2024-07-11 21:34:37.826572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.826589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.120 [2024-07-11 21:34:37.826603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.826619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.120 [2024-07-11 21:34:37.826634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.826650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.120 [2024-07-11 21:34:37.826667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.826685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.120 [2024-07-11 21:34:37.826700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.826716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.120 [2024-07-11 21:34:37.826731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.826746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.120 [2024-07-11 21:34:37.826767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.826785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.120 [2024-07-11 21:34:37.826805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.826821] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec1d40 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.826902] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ec1d40 was disconnected and freed. reset controller.
00:28:03.120 [2024-07-11 21:34:37.828185] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc8b0 (9): Bad file descriptor
00:28:03.120 [2024-07-11 21:34:37.828221] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828237] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828250] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828263] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828276] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828288] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828313] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828326] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828330] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.120 [2024-07-11 21:34:37.828339] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828353] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.828366] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828372] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.120 [2024-07-11 21:34:37.828385] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.828399] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828403] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.120 [2024-07-11 21:34:37.828412] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.828426] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828441] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828442] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.120 [2024-07-11 21:34:37.828454] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.828467] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828471] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24e10 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828481] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828494] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828520] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2024ef0 is same with the state(5) to be set
00:28:03.120 [2024-07-11 21:34:37.828524] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.120 [2024-07-11 21:34:37.828546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.828562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.120 [2024-07-11 21:34:37.828576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.828591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.120 [2024-07-11 21:34:37.828604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.828619] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:28:03.120 [2024-07-11 21:34:37.828633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.120 [2024-07-11 21:34:37.828651]
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d79830 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.828699] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.121 [2024-07-11 21:34:37.828720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.828735] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.121 [2024-07-11 21:34:37.828750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.828774] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.121 [2024-07-11 21:34:37.828788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.828807] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.121 [2024-07-11 21:34:37.828820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.828834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95b50 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.828880] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.121 [2024-07-11 21:34:37.828901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.828917] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.121 [2024-07-11 21:34:37.828930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.828944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.121 [2024-07-11 21:34:37.828958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.828973] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:03.121 [2024-07-11 21:34:37.828987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.829000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7d700 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d59290 (9): Bad file descriptor 00:28:03.121 [2024-07-11 21:34:37.829476] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the 
state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829507] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829522] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829535] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829550] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829564] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829582] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829597] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829609] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829624] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829637] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829650] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829663] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829678] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829691] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829704] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829717] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829731] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829745] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829767] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829781] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829800] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829815] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829828] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829841] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829853] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829867] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829886] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829900] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829913] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829926] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829938] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829951] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829967] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829983] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.829997] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830010] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830023] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830035] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830048] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830060] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830075] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830088] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830100] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 
21:34:37.830112] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830141] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830153] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830165] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830189] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830201] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830212] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830224] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830236] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830260] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830287] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830300] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830312] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830330] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830347] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830359] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830372] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025390 is same with the state(5) to be set 00:28:03.121 [2024-07-11 21:34:37.830458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 
len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.830977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.830993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.831008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.831024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.121 [2024-07-11 21:34:37.831038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.121 [2024-07-11 21:34:37.831070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-11 21:34:37.831093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.122 [2024-07-11 21:34:37.831109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-11 21:34:37.831123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.122 [2024-07-11 21:34:37.831138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.122 [2024-07-11 21:34:37.831152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.122 [2024-07-11 21:34:37.831169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0
00:28:03.122 [2024-07-11 21:34:37.831182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.122 [2024-07-11 21:34:37.831198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.122 [2024-07-11 21:34:37.831212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.122 [2024-07-11 21:34:37.831228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.122 [2024-07-11 21:34:37.831247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.122 [2024-07-11 21:34:37.831263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.122 [2024-07-11 21:34:37.831276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.122 [2024-07-11 21:34:37.831308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831486] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025830 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.831497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831518] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025830 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.831531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831534] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025830 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.831546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831548] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025830 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.831562] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025830 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.831565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.831967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.831982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832095] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832123] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832148] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832164] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832177] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832191] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832207] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832222] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832235] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832248] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832255] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.123 [2024-07-11 21:34:37.832265] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set
00:28:03.123 [2024-07-11 21:34:37.832272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.123 [2024-07-11 21:34:37.832281-832982] tcp.c:1607:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2025cd0 is same with the state(5) to be set (identical message repeated 52 more times; duplicates condensed)
00:28:03.124 [2024-07-11 21:34:37.857766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.124 [2024-07-11 21:34:37.857866-858104] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:52-59 nsid:1 lba:31232-32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (eight command/completion pairs condensed)
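For readers decoding these prints: the "(00/08)" pair that spdk_nvme_print_completion emits is the NVMe status code type and status code, and 00/08 is the generic-status "command aborted due to SQ deletion" raised when a submission queue is torn down with I/O still outstanding. A minimal sketch of checking for that status, assuming only the public spdk/nvme.h definitions (the helper name is hypothetical, not code from this test run):

    #include <stdbool.h>
    #include "spdk/nvme.h"

    /* "(00/08)" = (SCT/SC): status code type 0x00 (generic), status
     * code 0x08 (command aborted due to SQ deletion). */
    static bool
    aborted_by_sq_deletion(const struct spdk_nvme_cpl *cpl)
    {
            return cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                   cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION;
    }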
00:28:03.124 [2024-07-11 21:34:37.858274] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ec3ff0 was disconnected and freed. reset controller.
00:28:03.124 [2024-07-11 21:34:37.859068-860356] nvme_qpair.c: *NOTICE*: WRITE sqid:1 cid:24-63 nsid:1 lba:27648-32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (40 command/completion pairs condensed)
00:28:03.125 [2024-07-11 21:34:37.860374-861127] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-23 nsid:1 lba:24576-27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (24 command/completion pairs condensed)
00:28:03.125 [2024-07-11 21:34:37.861185] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:28:03.125 [2024-07-11 21:34:37.861262] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ec6950 was disconnected and freed. reset controller.
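The -6 above is -ENXIO ("No such device or address") surfacing from spdk_nvme_qpair_process_completions() once the TCP connection is gone, after which bdev_nvme frees the qpair and resets the controller. A minimal sketch of the same recovery pattern in a standalone SPDK host application (hypothetical code against the public spdk/nvme.h API, not the bdev_nvme implementation itself):

    #include <errno.h>
    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair **qpair)
    {
            /* max_completions == 0: drain everything that is ready */
            int32_t rc = spdk_nvme_qpair_process_completions(*qpair, 0);

            if (rc == -ENXIO) {
                    /* Transport failure (the "-6" in the log): free the
                     * dead qpair, reset the controller, rebuild the qpair. */
                    fprintf(stderr, "qpair failed, resetting controller\n");
                    spdk_nvme_ctrlr_free_io_qpair(*qpair);
                    if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
                            *qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
                    }
            }
    }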
00:28:03.125 [2024-07-11 21:34:37.861393] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller
00:28:03.125 [2024-07-11 21:34:37.861472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f24e10 (9): Bad file descriptor
00:28:03.125 [2024-07-11 21:34:37.861569-862211] nvme_qpair.c: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0-3 nsid:0 cdw10:00000000 cdw11:00000000, each completed ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0, on each of the admin tqpairs 0x1db2370, 0x1daf910, 0x1d7cfd0 and 0x1851610; each group followed by nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of that tqpair is same with the state(5) to be set (16 command/completion pairs condensed)
00:28:03.125 [2024-07-11 21:34:37.862234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d79830 (9): Bad file descriptor
00:28:03.125 [2024-07-11 21:34:37.862263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d95b50 (9): Bad file descriptor
00:28:03.125 [2024-07-11 21:34:37.862294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7d700 (9): Bad file descriptor
00:28:03.125 [2024-07-11 21:34:37.862426] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
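The ASYNC EVENT REQUEST (opcode 0c) entries are the driver's standing admin commands for asynchronous event notification; during a reset they complete as ABORTED - SQ DELETION along with everything else queued, which an application's AER callback should treat as benign. A minimal sketch of such a callback (hypothetical application code, assuming the public spdk_nvme_ctrlr_register_aer_callback() API):

    #include <stdio.h>
    #include "spdk/nvme.h"

    static void
    aer_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
            if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
                cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
                    /* Expected while the admin qpair is torn down for reset. */
                    return;
            }
            /* Async Event completion dword0 carries the log page ID
             * in bits 23:16. */
            printf("async event: log page 0x%x\n", (cpl->cdw0 >> 16) & 0xff);
    }

    /* Registered once after attach:
     *   spdk_nvme_ctrlr_register_aer_callback(ctrlr, aer_cb, NULL);
     */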
00:28:03.125 [2024-07-11 21:34:37.864992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:28:03.125 [2024-07-11 21:34:37.865027] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851610 (9): Bad file descriptor
00:28:03.125 [2024-07-11 21:34:37.865107-879169] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (64 command/completion pairs condensed; elapsed-time prefix advances from 00:28:03.125 to 00:28:03.392 during this run)
00:28:03.392 [2024-07-11 21:34:37.879186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e19a70 is same with the state(5) to be set
00:28:03.392 [2024-07-11 21:34:37.881233-881377] nvme_qpair.c: *NOTICE*: READ sqid:1 cid:0-3 nsid:1 lba:16384-16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (four command/completion pairs condensed)
00:28:03.392 [2024-07-11 21:34:37.881394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881408] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.881976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.881992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.882007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.882023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.882038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.392 [2024-07-11 21:34:37.882053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.392 [2024-07-11 21:34:37.882068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:03.393 [2024-07-11 21:34:37.882668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.882957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.882972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 
21:34:37.882987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.393 [2024-07-11 21:34:37.883245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.393 [2024-07-11 21:34:37.883260] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec8e20 is same with the state(5) to be set 00:28:03.393 [2024-07-11 21:34:37.885237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:03.393 [2024-07-11 21:34:37.885270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: 
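The "(00/08)" pair printed on every completion above decodes, per the NVMe spec, as status code type 0x0 (generic command status) and status code 0x08, Command Aborted due to SQ Deletion: the reads were not failed by the storage, they were still in flight when their submission queue was deleted ahead of the controller resets below. A minimal sketch of how a host completion callback can separate these reset-induced aborts from genuine I/O errors, using SPDK's public completion types; the io_ctx struct and its requeue policy are illustrative assumptions, not part of this test:

/* abort_filter.c: minimal sketch, assuming SPDK development headers.
 * Classifies "ABORTED - SQ DELETION (00/08)" completions like the ones
 * logged above. The io_ctx struct and requeue flag are hypothetical. */
#include <inttypes.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>
#include "spdk/nvme.h" /* struct spdk_nvme_cpl, spdk_nvme_cpl_is_error() */

struct io_ctx {
	uint64_t lba;     /* starting LBA of the request, e.g. 21376 above */
	bool     requeue; /* set when the abort was reset-induced */
};

/* Matches the spdk_nvme_cmd_cb signature used by spdk_nvme_ns_cmd_read(). */
static void
io_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	struct io_ctx *ctx = arg;

	if (!spdk_nvme_cpl_is_error(cpl)) {
		return; /* normal completion */
	}

	/* SCT 0x0 / SC 0x08 is exactly the "(00/08)" pair in the log. */
	if (cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
	    cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
		/* Not a media error: the qpair's SQ went away mid-reset,
		 * so this I/O is safe to resubmit once the controller
		 * has been reconnected. */
		ctx->requeue = true;
		return;
	}

	fprintf(stderr, "I/O at lba %" PRIu64 " failed: sct=%u sc=%u\n",
		ctx->lba, (unsigned)cpl->status.sct, (unsigned)cpl->status.sc);
}

That these aborts are printed at *NOTICE* rather than *ERROR* level is consistent with them being expected fallout of the deliberate resets, not test failures.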
00:28:03.393 [2024-07-11 21:34:37.885237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:28:03.393 [2024-07-11 21:34:37.885270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:28:03.393 [2024-07-11 21:34:37.885294] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:28:03.393 [2024-07-11 21:34:37.885347] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf910 (9): Bad file descriptor
00:28:03.393 [2024-07-11 21:34:37.885503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.393 [2024-07-11 21:34:37.885534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24e10 with addr=10.0.0.2, port=4420
00:28:03.393 [2024-07-11 21:34:37.885551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24e10 is same with the state(5) to be set
00:28:03.394 [2024-07-11 21:34:37.885614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db2370 (9): Bad file descriptor
00:28:03.394 [2024-07-11 21:34:37.885654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7cfd0 (9): Bad file descriptor
00:28:03.394 [2024-07-11 21:34:37.885711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f24e10 (9): Bad file descriptor
00:28:03.394 [2024-07-11 21:34:37.885856] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:03.394 [2024-07-11 21:34:37.885934] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:28:03.394 [2024-07-11 21:34:37.886771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.394 [2024-07-11 21:34:37.886802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1851610 with addr=10.0.0.2, port=4420
00:28:03.394 [2024-07-11 21:34:37.886819] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851610 is same with the state(5) to be set
00:28:03.394 [2024-07-11 21:34:37.886930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.394 [2024-07-11 21:34:37.886956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d59290 with addr=10.0.0.2, port=4420
00:28:03.394 [2024-07-11 21:34:37.886972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59290 is same with the state(5) to be set
00:28:03.394 [2024-07-11 21:34:37.887078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.394 [2024-07-11 21:34:37.887103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc8b0 with addr=10.0.0.2, port=4420
00:28:03.394 [2024-07-11 21:34:37.887119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc8b0 is same with the state(5) to be set
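For context on the block above: errno 111 is ECONNREFUSED on Linux, so each posix_sock_create failure is a reconnect attempt to 10.0.0.2:4420 arriving while the target side is still being torn down, and the "Bad file descriptor" flushes come from qpairs whose sockets are already closed. A bounded retry around spdk_nvme_ctrlr_reset() is one way a host application rides out such a window; a sketch follows, where the attempt budget and delay are assumed values for illustration, not parameters of this test:

/* reset_retry.c: minimal sketch of riding out the ECONNREFUSED window
 * seen above while an NVMe-oF TCP target restarts. The retry budget
 * and delay are assumptions, not values taken from this log. */
#include <stdio.h>
#include <unistd.h>
#include "spdk/nvme.h"

static int
reset_with_retry(struct spdk_nvme_ctrlr *ctrlr)
{
	const int max_attempts = 10;            /* assumption */
	const useconds_t delay_us = 200 * 1000; /* 200 ms, assumption */

	for (int attempt = 1; attempt <= max_attempts; attempt++) {
		/* spdk_nvme_ctrlr_reset() disconnects and reconnects the
		 * controller's qpairs; on NVMe/TCP that re-runs the same
		 * connect() that is failing with errno 111 above. */
		if (spdk_nvme_ctrlr_reset(ctrlr) == 0) {
			printf("controller reset OK after %d attempt(s)\n",
			       attempt);
			return 0;
		}
		usleep(delay_us);
	}
	fprintf(stderr, "controller still down after %d attempts\n",
		max_attempts);
	return -1;
}

spdk_nvme_ctrlr_reset() blocks until the reset either completes or fails, so a loop like this exits on the first attempt that finds the target listening again.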
00:28:03.394 [2024-07-11 21:34:37.887474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.394 [2024-07-11 21:34:37.887498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same command + completion pair repeats for READ cid:5-63, lba:17024-24448, then for WRITE cid:0-3, lba:24576-24960, every command ABORTED - SQ DELETION (00/08) ...]
00:28:03.395 [2024-07-11 21:34:37.889504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec3160 is same with the state(5) to be set
00:28:03.395 [2024-07-11 21:34:37.890791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.395 [2024-07-11 21:34:37.890815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ + completion pair repeats for cid:1-47, lba:24704-30592, every command ABORTED - SQ DELETION (00/08), and the run continues below ...]
00:28:03.398 [2024-07-11 21:34:37.892329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 
21:34:37.892644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.892822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.892836] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d53a00 is same with the state(5) to be set 00:28:03.398 [2024-07-11 21:34:37.894090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.398 [2024-07-11 21:34:37.894405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.398 [2024-07-11 21:34:37.894422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894532] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.894981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.894997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.399 [2024-07-11 21:34:37.895406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.399 [2024-07-11 21:34:37.895421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895483] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895812] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.895984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.895999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.896015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.896030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.896046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.896060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.896077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.896091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.896108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.400 [2024-07-11 21:34:37.896123] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.400 [2024-07-11 21:34:37.896142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d54eb0 is same with the state(5) to be set 00:28:03.400 [2024-07-11 21:34:37.897446] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:03.400 [2024-07-11 21:34:37.897549] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:28:03.400 [2024-07-11 21:34:37.897883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:28:03.400 [2024-07-11 21:34:37.897915] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:28:03.400 [2024-07-11 21:34:37.897933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:28:03.400 [2024-07-11 21:34:37.898145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.400 [2024-07-11 21:34:37.898176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1daf910 with addr=10.0.0.2, port=4420 00:28:03.400 [2024-07-11 21:34:37.898194] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf910 is same with the state(5) to be set 00:28:03.400 [2024-07-11 21:34:37.898220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851610 (9): Bad file descriptor 00:28:03.400 [2024-07-11 21:34:37.898240] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d59290 (9): Bad file descriptor 00:28:03.400 [2024-07-11 21:34:37.898259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc8b0 (9): Bad file descriptor 00:28:03.400 [2024-07-11 21:34:37.898276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:03.400 [2024-07-11 21:34:37.898291] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:03.400 [2024-07-11 21:34:37.898309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:03.400 [2024-07-11 21:34:37.898383] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.400 [2024-07-11 21:34:37.898407] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.400 [2024-07-11 21:34:37.898428] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.400 [2024-07-11 21:34:37.898447] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.400 [2024-07-11 21:34:37.898467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf910 (9): Bad file descriptor 00:28:03.400 [2024-07-11 21:34:37.898611] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.400 [2024-07-11 21:34:37.898734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.400 [2024-07-11 21:34:37.898770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7d700 with addr=10.0.0.2, port=4420
00:28:03.400 [2024-07-11 21:34:37.898788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7d700 is same with the state(5) to be set
00:28:03.400 [2024-07-11 21:34:37.898887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.400 [2024-07-11 21:34:37.898913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d95b50 with addr=10.0.0.2, port=4420
00:28:03.401 [2024-07-11 21:34:37.898929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d95b50 is same with the state(5) to be set
00:28:03.401 [2024-07-11 21:34:37.899028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:28:03.401 [2024-07-11 21:34:37.899054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d79830 with addr=10.0.0.2, port=4420
00:28:03.401 [2024-07-11 21:34:37.899069] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d79830 is same with the state(5) to be set
00:28:03.401 [2024-07-11 21:34:37.899092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:28:03.401 [2024-07-11 21:34:37.899106] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:28:03.401 [2024-07-11 21:34:37.899119] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:28:03.401 [2024-07-11 21:34:37.899139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:28:03.401 [2024-07-11 21:34:37.899154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:28:03.401 [2024-07-11 21:34:37.899168] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:28:03.401 [2024-07-11 21:34:37.899188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state
00:28:03.401 [2024-07-11 21:34:37.899202] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed
00:28:03.401 [2024-07-11 21:34:37.899215] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state.
00:28:03.401 [2024-07-11 21:34:37.900118 - 21:34:37.902333] nvme_qpair.c: [64 repeated command/completion pairs condensed] *NOTICE*: READ sqid:1 cid:0-63 nsid:1 lba:16384-24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.403 [2024-07-11 21:34:37.902347] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec54a0 is same with the state(5) to be set
00:28:03.403 [2024-07-11 21:34:37.903609 - 21:34:37.903771] nvme_qpair.c: [repeated command/completion pairs condensed] *NOTICE*: READ sqid:1 cid:0-4 nsid:1 lba:16384-16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:28:03.403 [2024-07-11 21:34:37.903788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:28:03.403 [2024-07-11 21:34:37.903802] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.903819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.903833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.903855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.903871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.903887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.903902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.903918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.903933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.903949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.903963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.903979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.903994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.403 [2024-07-11 21:34:37.904338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.403 [2024-07-11 21:34:37.904355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.904975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.904989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:28:03.404 [2024-07-11 21:34:37.905070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 
21:34:37.905381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.404 [2024-07-11 21:34:37.905395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.404 [2024-07-11 21:34:37.905414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.405 [2024-07-11 21:34:37.905429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.405 [2024-07-11 21:34:37.905445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.405 [2024-07-11 21:34:37.905460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.405 [2024-07-11 21:34:37.905476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.405 [2024-07-11 21:34:37.905490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.405 [2024-07-11 21:34:37.905505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.405 [2024-07-11 21:34:37.905519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.405 [2024-07-11 21:34:37.905536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.405 [2024-07-11 21:34:37.905549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.405 [2024-07-11 21:34:37.905565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.405 [2024-07-11 21:34:37.905579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.405 [2024-07-11 21:34:37.905595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:03.405 [2024-07-11 21:34:37.905610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:03.405 [2024-07-11 21:34:37.905624] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ec7a60 is same with the state(5) to be set 00:28:03.405 [2024-07-11 21:34:37.907962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.405 [2024-07-11 21:34:37.907988] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.405 [2024-07-11 21:34:37.908000] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
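The two bursts above are one event seen from both qpairs: when bdev_nvme resets a controller it deletes the I/O submission queues, and every command still queued completes with ABORTED - SQ DELETION (00/08). When triaging a capture like this it helps to collapse the storm first; a minimal sketch, assuming the console output was saved to a local file (the default file name below is made up):

    #!/usr/bin/env bash
    # Summarize abort storms in a saved autotest console log.
    # nvmf_shutdown_tc3.log is a hypothetical capture file name.
    log=${1:-nvmf_shutdown_tc3.log}

    # Total commands completed as ABORTED - SQ DELETION:
    grep -c 'ABORTED - SQ DELETION' "$log"

    # Which TCP qpairs show up, and how often:
    grep -o 'tqpair=0x[0-9a-f]*' "$log" | sort | uniq -c | sort -rn

On this excerpt that comes to 34 + 64 = 98 aborted reads across qpairs 0x1ec54a0 and 0x1ec7a60.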
00:28:03.405 [2024-07-11 21:34:37.908018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller
00:28:03.405 task offset: 24576 on job bdev=Nvme2n1 fails
00:28:03.405
00:28:03.405 Latency(us)
00:28:03.405 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:28:03.405 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme1n1 ended in about 0.99 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme1n1 : 0.99 129.84 8.11 64.92 0.00 325254.76 19806.44 281173.71
00:28:03.405 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme2n1 ended in about 0.94 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme2n1 : 0.94 205.18 12.82 68.39 0.00 226789.83 19515.16 254765.13
00:28:03.405 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme3n1 ended in about 1.00 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme3n1 : 1.00 132.52 8.28 64.25 0.00 309989.74 26991.12 315349.52
00:28:03.405 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme4n1 ended in about 1.00 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme4n1 : 1.00 192.12 12.01 64.04 0.00 233563.97 21554.06 248551.35
00:28:03.405 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme5n1 ended in about 1.00 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme5n1 : 1.00 195.47 12.22 63.83 0.00 226256.79 25243.50 233016.89
00:28:03.405 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme6n1 ended in about 0.97 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme6n1 : 0.97 198.09 12.38 66.03 0.00 216847.36 21456.97 257872.02
00:28:03.405 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme7n1 ended in about 1.01 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme7n1 : 1.01 126.87 7.93 63.44 0.00 296467.47 30098.01 278066.82
00:28:03.405 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme8n1 ended in about 0.97 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme8n1 : 0.97 197.86 12.37 65.95 0.00 208233.20 5121.52 257872.02
00:28:03.405 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme9n1 ended in about 1.01 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme9n1 : 1.01 126.46 7.90 63.23 0.00 285751.12 22913.33 276513.37
00:28:03.405 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:28:03.405 Job: Nvme10n1 ended in about 0.99 seconds with error
00:28:03.405 Verification LBA range: start 0x0 length 0x400
00:28:03.405 Nvme10n1 : 0.99 129.31 8.08 64.66 0.00 272188.30 21359.88 273406.48
00:28:03.405 ===================================================================================================================
00:28:03.405 Total : 1633.72 102.11 648.74 0.00 254782.47 5121.52 315349.52
00:28:03.405 [2024-07-11 21:34:37.936572] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on
non-zero 00:28:03.405 [2024-07-11 21:34:37.936663] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller 00:28:03.405 [2024-07-11 21:34:37.936768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7d700 (9): Bad file descriptor 00:28:03.405 [2024-07-11 21:34:37.936800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d95b50 (9): Bad file descriptor 00:28:03.405 [2024-07-11 21:34:37.936820] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d79830 (9): Bad file descriptor 00:28:03.405 [2024-07-11 21:34:37.936837] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:03.405 [2024-07-11 21:34:37.936851] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:03.405 [2024-07-11 21:34:37.936869] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:03.405 [2024-07-11 21:34:37.936961] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.405 [2024-07-11 21:34:37.937097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.405 [2024-07-11 21:34:37.937363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.405 [2024-07-11 21:34:37.937399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d7cfd0 with addr=10.0.0.2, port=4420 00:28:03.405 [2024-07-11 21:34:37.937419] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d7cfd0 is same with the state(5) to be set 00:28:03.405 [2024-07-11 21:34:37.937546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.405 [2024-07-11 21:34:37.937583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db2370 with addr=10.0.0.2, port=4420 00:28:03.405 [2024-07-11 21:34:37.937600] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db2370 is same with the state(5) to be set 00:28:03.405 [2024-07-11 21:34:37.937615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:28:03.405 [2024-07-11 21:34:37.937629] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:28:03.405 [2024-07-11 21:34:37.937642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:28:03.405 [2024-07-11 21:34:37.937661] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:28:03.405 [2024-07-11 21:34:37.937676] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:28:03.405 [2024-07-11 21:34:37.937689] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 
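The Total line in the latency table above is just the column-wise aggregate of the ten NvmeXn1 rows, which is easy to sanity-check; a sketch assuming the table was saved to a file (bdevperf_table.txt is a placeholder):

    # Sum IOPS, MiB/s and Fail/s across the per-controller rows.
    awk '{
      for (i = 1; i < NF; i++)
        if ($i ~ /^Nvme[0-9]+n1$/ && $(i + 1) == ":") {
          iops += $(i + 3); mib += $(i + 4); fail += $(i + 5)
        }
    } END { printf "IOPS=%.2f MiB/s=%.2f Fail/s=%.2f\n", iops, mib, fail }' bdevperf_table.txt

For the rows above this prints IOPS=1633.72 MiB/s=102.10 Fail/s=648.74, matching the Total line to within display rounding.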
00:28:03.405 [2024-07-11 21:34:37.937706] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:28:03.406 [2024-07-11 21:34:37.937720] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:28:03.406 [2024-07-11 21:34:37.937732] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:28:03.406 [2024-07-11 21:34:37.937761] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.406 [2024-07-11 21:34:37.937791] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.406 [2024-07-11 21:34:37.937811] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.406 [2024-07-11 21:34:37.937843] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.406 [2024-07-11 21:34:37.937864] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.406 [2024-07-11 21:34:37.937883] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.406 [2024-07-11 21:34:37.937901] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:28:03.406 [2024-07-11 21:34:37.938510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:28:03.406 [2024-07-11 21:34:37.938547] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:28:03.406 [2024-07-11 21:34:37.938564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:28:03.406 [2024-07-11 21:34:37.938579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:28:03.406 [2024-07-11 21:34:37.938630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.406 [2024-07-11 21:34:37.938646] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.406 [2024-07-11 21:34:37.938658] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
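While the reset loop above spins, controller state can also be inspected from outside the application through its RPC socket using SPDK's stock rpc.py; a hedged example (the socket path and controller name here are assumptions, not taken from this run's trace):

    # bdev_nvme_get_controllers is a standard SPDK RPC; adjust -s and -n.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers -n Nvme2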
00:28:03.406 [2024-07-11 21:34:37.938698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d7cfd0 (9): Bad file descriptor 00:28:03.406 [2024-07-11 21:34:37.938720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1db2370 (9): Bad file descriptor 00:28:03.406 [2024-07-11 21:34:37.939180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.406 [2024-07-11 21:34:37.939211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1f24e10 with addr=10.0.0.2, port=4420 00:28:03.406 [2024-07-11 21:34:37.939228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1f24e10 is same with the state(5) to be set 00:28:03.406 [2024-07-11 21:34:37.939348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.406 [2024-07-11 21:34:37.939375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1edc8b0 with addr=10.0.0.2, port=4420 00:28:03.406 [2024-07-11 21:34:37.939391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1edc8b0 is same with the state(5) to be set 00:28:03.406 [2024-07-11 21:34:37.939496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.406 [2024-07-11 21:34:37.939523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d59290 with addr=10.0.0.2, port=4420 00:28:03.406 [2024-07-11 21:34:37.939539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d59290 is same with the state(5) to be set 00:28:03.406 [2024-07-11 21:34:37.939645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.406 [2024-07-11 21:34:37.939672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1851610 with addr=10.0.0.2, port=4420 00:28:03.406 [2024-07-11 21:34:37.939688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1851610 is same with the state(5) to be set 00:28:03.406 [2024-07-11 21:34:37.939703] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:28:03.406 [2024-07-11 21:34:37.939716] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:28:03.406 [2024-07-11 21:34:37.939729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:28:03.406 [2024-07-11 21:34:37.939747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:28:03.406 [2024-07-11 21:34:37.939772] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:28:03.406 [2024-07-11 21:34:37.939797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:28:03.406 [2024-07-11 21:34:37.939844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:28:03.406 [2024-07-11 21:34:37.939876] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.406 [2024-07-11 21:34:37.939894] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:28:03.406 [2024-07-11 21:34:37.939919] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f24e10 (9): Bad file descriptor 00:28:03.406 [2024-07-11 21:34:37.939941] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1edc8b0 (9): Bad file descriptor 00:28:03.406 [2024-07-11 21:34:37.939959] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d59290 (9): Bad file descriptor 00:28:03.406 [2024-07-11 21:34:37.939976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1851610 (9): Bad file descriptor 00:28:03.406 [2024-07-11 21:34:37.940097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:28:03.406 [2024-07-11 21:34:37.940125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1daf910 with addr=10.0.0.2, port=4420 00:28:03.406 [2024-07-11 21:34:37.940141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1daf910 is same with the state(5) to be set 00:28:03.406 [2024-07-11 21:34:37.940156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:28:03.406 [2024-07-11 21:34:37.940169] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:28:03.406 [2024-07-11 21:34:37.940182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:28:03.406 [2024-07-11 21:34:37.940200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:28:03.406 [2024-07-11 21:34:37.940214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:28:03.406 [2024-07-11 21:34:37.940233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:28:03.406 [2024-07-11 21:34:37.940250] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:28:03.406 [2024-07-11 21:34:37.940264] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:28:03.406 [2024-07-11 21:34:37.940276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:28:03.406 [2024-07-11 21:34:37.940291] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:28:03.406 [2024-07-11 21:34:37.940305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:28:03.406 [2024-07-11 21:34:37.940318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:28:03.406 [2024-07-11 21:34:37.940354] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.406 [2024-07-11 21:34:37.940372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.406 [2024-07-11 21:34:37.940384] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.406 [2024-07-11 21:34:37.940396] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
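Every reconnect attempt above fails in posix_sock_create() with errno = 111 (ECONNREFUSED): the target that served 10.0.0.2:4420 has been killed, so nothing is listening anymore. That is easy to confirm from the test host with nothing but bash's /dev/tcp redirection:

    # Probe the NVMe-oF TCP listener; a dead target refuses the connection,
    # which maps to the errno = 111 seen in the log.
    if timeout 2 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
      echo "listener at 10.0.0.2:4420 is still up"
    else
      echo "connection refused or timed out - the target is gone"
    fi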
00:28:03.406 [2024-07-11 21:34:37.940411] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1daf910 (9): Bad file descriptor 00:28:03.406 [2024-07-11 21:34:37.940452] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:28:03.406 [2024-07-11 21:34:37.940470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:28:03.406 [2024-07-11 21:34:37.940484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:28:03.406 [2024-07-11 21:34:37.940522] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:28:03.666 21:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:28:03.666 21:34:38 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 991661 00:28:05.040 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (991661) - No such process 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:05.040 rmmod nvme_tcp 00:28:05.040 rmmod nvme_fabrics 00:28:05.040 rmmod nvme_keyring 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:05.040 21:34:39 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.946 21:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:06.946 00:28:06.946 real 0m7.124s 00:28:06.946 user 0m16.325s 00:28:06.946 sys 0m1.462s 00:28:06.946 21:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.946 21:34:41 nvmf_tcp.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:28:06.946 ************************************ 00:28:06.946 END TEST nvmf_shutdown_tc3 00:28:06.946 ************************************ 00:28:06.946 21:34:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:28:06.946 21:34:41 nvmf_tcp.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:28:06.946 00:28:06.946 real 0m27.004s 00:28:06.946 user 1m14.420s 00:28:06.946 sys 0m6.444s 00:28:06.946 21:34:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:06.946 21:34:41 nvmf_tcp.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:06.946 ************************************ 00:28:06.946 END TEST nvmf_shutdown 00:28:06.946 ************************************ 00:28:06.946 21:34:41 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:06.946 21:34:41 nvmf_tcp -- nvmf/nvmf.sh@86 -- # timing_exit target 00:28:06.946 21:34:41 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:06.946 21:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.946 21:34:41 nvmf_tcp -- nvmf/nvmf.sh@88 -- # timing_enter host 00:28:06.946 21:34:41 nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:06.946 21:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.946 21:34:41 nvmf_tcp -- nvmf/nvmf.sh@90 -- # [[ 0 -eq 0 ]] 00:28:06.946 21:34:41 nvmf_tcp -- nvmf/nvmf.sh@91 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:06.946 21:34:41 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:06.946 21:34:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:06.946 21:34:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:06.946 ************************************ 00:28:06.946 START TEST nvmf_multicontroller 00:28:06.946 ************************************ 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:28:06.946 * Looking for test storage... 
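The shutdown_tc3 teardown traced above reduces to a few idempotent steps; the following is a condensed reconstruction inferred from the xtrace, not the verbatim bodies of shutdown.sh and nvmf/common.sh ($testdir stands in for the absolute test directory shown in the trace):

    stoptarget() {
      rm -f ./local-job0-0-verify.state
      rm -rf "$testdir/bdevperf.conf" "$testdir/rpcs.txt"
      nvmftestfini
    }

    nvmftestfini() {
      sync
      set +e                   # module unload may fail while qpairs drain
      for i in {1..20}; do     # the trace shows a bounded retry loop
        modprobe -v -r nvme-tcp && break
        sleep 1
      done
      modprobe -v -r nvme-fabrics
      set -e
    }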
00:28:06.946 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:06.946 21:34:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:28:06.947 21:34:41 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:28:06.947 21:34:41 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.476 21:34:43 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:09.476 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:09.476 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # 
(( 1 == 0 )) 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:09.476 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:09.476 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:09.477 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.477 21:34:43 
nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:09.477 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.477 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.266 ms 00:28:09.477 00:28:09.477 --- 10.0.0.2 ping statistics --- 00:28:09.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.477 rtt min/avg/max/mdev = 0.266/0.266/0.266/0.000 ms 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.477 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:09.477 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.129 ms 00:28:09.477 00:28:09.477 --- 10.0.0.1 ping statistics --- 00:28:09.477 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.477 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=994061 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 994061 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 994061 ']' 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.477 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.477 21:34:43 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.477 [2024-07-11 21:34:43.906038] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:09.477 [2024-07-11 21:34:43.906137] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.477 EAL: No free 2048 kB hugepages reported on node 1 00:28:09.477 [2024-07-11 21:34:43.974305] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:09.477 [2024-07-11 21:34:44.063426] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.477 [2024-07-11 21:34:44.063489] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:09.477 [2024-07-11 21:34:44.063503] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.477 [2024-07-11 21:34:44.063515] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.477 [2024-07-11 21:34:44.063525] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
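For orientation: nvmfappstart above launched the target inside the test namespace, and waitforlisten blocks until its RPC socket answers before any rpc_cmd is issued. A minimal sketch, assuming the default /var/tmp/spdk.sock socket and rpc_get_methods as the readiness probe (the launch line is taken from the @480 trace entry; the polling loop is an approximation of waitforlisten, not its exact implementation):

    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # block until the app answers on its UNIX-domain RPC socket
    until /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
        sleep 0.5
    done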
00:28:09.477 [2024-07-11 21:34:44.063607] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.477 [2024-07-11 21:34:44.063640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.477 [2024-07-11 21:34:44.063642] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.477 [2024-07-11 21:34:44.211838] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.477 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.736 Malloc0 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.736 [2024-07-11 21:34:44.280399] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.736 
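The rpc_cmd calls above map one-to-one onto scripts/rpc.py invocations against the target's RPC socket; restated as a sketch with flags exactly as traced (the comments are the editor's reading of the test constants MALLOC_BDEV_SIZE=64 and MALLOC_BLOCK_SIZE=512):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py   # talks to /var/tmp/spdk.sock by default
    $RPC nvmf_create_transport -t tcp -o -u 8192
    $RPC bdev_malloc_create 64 512 -b Malloc0        # 64 MB backing bdev, 512-byte blocks
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420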
21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.736 [2024-07-11 21:34:44.288266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.736 Malloc1 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.736 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=994092 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 994092 /var/tmp/bdevperf.sock 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 994092 ']' 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:09.737 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:09.737 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.006 NVMe0n1 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.006 1 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:28:10.006 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.007 request: 00:28:10.007 { 00:28:10.007 "name": "NVMe0", 00:28:10.007 "trtype": "tcp", 00:28:10.007 "traddr": "10.0.0.2", 00:28:10.007 "adrfam": "ipv4", 00:28:10.007 "trsvcid": "4420", 00:28:10.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.007 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:10.007 "hostaddr": "10.0.0.2", 00:28:10.007 "hostsvcid": "60000", 00:28:10.007 "prchk_reftag": false, 00:28:10.007 "prchk_guard": false, 00:28:10.007 "hdgst": false, 00:28:10.007 "ddgst": false, 00:28:10.007 "method": "bdev_nvme_attach_controller", 00:28:10.007 "req_id": 1 00:28:10.007 } 00:28:10.007 Got JSON-RPC error response 00:28:10.007 response: 00:28:10.007 { 00:28:10.007 "code": -114, 00:28:10.007 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:10.007 } 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.007 request: 00:28:10.007 { 00:28:10.007 "name": "NVMe0", 00:28:10.007 "trtype": "tcp", 00:28:10.007 "traddr": "10.0.0.2", 00:28:10.007 "adrfam": "ipv4", 00:28:10.007 "trsvcid": "4420", 00:28:10.007 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:10.007 "hostaddr": "10.0.0.2", 00:28:10.007 "hostsvcid": "60000", 00:28:10.007 
"prchk_reftag": false, 00:28:10.007 "prchk_guard": false, 00:28:10.007 "hdgst": false, 00:28:10.007 "ddgst": false, 00:28:10.007 "method": "bdev_nvme_attach_controller", 00:28:10.007 "req_id": 1 00:28:10.007 } 00:28:10.007 Got JSON-RPC error response 00:28:10.007 response: 00:28:10.007 { 00:28:10.007 "code": -114, 00:28:10.007 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:10.007 } 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:10.007 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.310 request: 00:28:10.310 { 00:28:10.310 "name": "NVMe0", 00:28:10.310 "trtype": "tcp", 00:28:10.310 "traddr": "10.0.0.2", 00:28:10.310 "adrfam": "ipv4", 00:28:10.310 "trsvcid": "4420", 00:28:10.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.310 "hostaddr": "10.0.0.2", 00:28:10.310 "hostsvcid": "60000", 00:28:10.310 "prchk_reftag": false, 00:28:10.310 "prchk_guard": false, 00:28:10.310 "hdgst": false, 00:28:10.310 "ddgst": false, 00:28:10.310 "multipath": "disable", 00:28:10.310 "method": "bdev_nvme_attach_controller", 00:28:10.310 "req_id": 1 00:28:10.310 } 00:28:10.310 Got JSON-RPC error response 00:28:10.310 response: 00:28:10.310 { 00:28:10.310 "code": -114, 00:28:10.310 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:28:10.310 } 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- 
common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:28:10.310 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.311 request: 00:28:10.311 { 00:28:10.311 "name": "NVMe0", 00:28:10.311 "trtype": "tcp", 00:28:10.311 "traddr": "10.0.0.2", 00:28:10.311 "adrfam": "ipv4", 00:28:10.311 "trsvcid": "4420", 00:28:10.311 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:10.311 "hostaddr": "10.0.0.2", 00:28:10.311 "hostsvcid": "60000", 00:28:10.311 "prchk_reftag": false, 00:28:10.311 "prchk_guard": false, 00:28:10.311 "hdgst": false, 00:28:10.311 "ddgst": false, 00:28:10.311 "multipath": "failover", 00:28:10.311 "method": "bdev_nvme_attach_controller", 00:28:10.311 "req_id": 1 00:28:10.311 } 00:28:10.311 Got JSON-RPC error response 00:28:10.311 response: 00:28:10.311 { 00:28:10.311 "code": -114, 00:28:10.311 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:28:10.311 } 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.311 21:34:44 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.311 
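Summarizing the four NOT-wrapped attempts in this stretch of the trace: re-attaching under the existing bdev name NVMe0 with a different hostnqn, a different subnqn, or an explicit multipath mode (-x disable / -x failover) on the already-used network path is rejected with JSON-RPC error -114, while the plain @79 call above, which only moves to the second listener port, is accepted as an additional path on the same controller. A sketch of the accepted form, socket and arguments as traced:

    rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1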
00:28:10.311 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.311 21:34:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:10.311 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.311 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.311 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.311 21:34:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:28:10.311 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.311 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.568 00:28:10.568 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.568 21:34:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:10.568 21:34:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:28:10.568 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:10.568 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:10.568 21:34:45 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:10.568 21:34:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:28:10.568 21:34:45 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:11.942 0 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 994092 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 994092 ']' 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 994092 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 994092 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 994092' 00:28:11.942 
killing process with pid 994092 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 994092 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 994092 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:28:11.942 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:28:11.942 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:11.942 [2024-07-11 21:34:44.393080] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:11.942 [2024-07-11 21:34:44.393165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid994092 ] 00:28:11.942 EAL: No free 2048 kB hugepages reported on node 1 00:28:11.942 [2024-07-11 21:34:44.455202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:11.942 [2024-07-11 21:34:44.544532] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:11.942 [2024-07-11 21:34:45.263159] bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d03201c9-fb54-461d-9f8a-5999538c0b39 already exists 00:28:11.942 [2024-07-11 21:34:45.263206] bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d03201c9-fb54-461d-9f8a-5999538c0b39 alias for bdev NVMe1n1 00:28:11.942 [2024-07-11 21:34:45.263229] bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:11.942 Running I/O for 1 seconds... 
00:28:11.942 00:28:11.942 Latency(us) 00:28:11.942 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.942 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:11.942 NVMe0n1 : 1.01 19108.82 74.64 0.00 0.00 6679.59 2111.72 11893.57 00:28:11.942 =================================================================================================================== 00:28:11.943 Total : 19108.82 74.64 0.00 0.00 6679.59 2111.72 11893.57 00:28:11.943 Received shutdown signal, test time was about 1.000000 seconds 00:28:11.943 00:28:11.943 Latency(us) 00:28:11.943 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:11.943 =================================================================================================================== 00:28:11.943 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:28:11.943 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:11.943 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:11.943 rmmod nvme_tcp 00:28:11.943 rmmod nvme_fabrics 00:28:12.200 rmmod nvme_keyring 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 994061 ']' 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 994061 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 994061 ']' 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 994061 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 994061 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 994061' 00:28:12.200 killing process with pid 994061 00:28:12.200 21:34:46 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 994061 00:28:12.200 21:34:46 
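The killprocess/wait pair traced here around pid 994061 reduces to roughly this helper (a sketch assuming the target is a child of the test shell; the kill -0 liveness check and the sudo guard mirror the @952 and @958 entries, the rest is approximate):

    killprocess() {
        local pid=$1
        kill -0 "$pid" 2>/dev/null || return 0                        # already exited
        [ "$(ps --no-headers -o comm= "$pid")" = sudo ] && return 1   # never signal a sudo wrapper
        kill "$pid"
        wait "$pid" || true                                           # reap and collect the exit status
    }
    killprocess 994061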
nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 994061 00:28:12.458 21:34:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:12.458 21:34:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:12.458 21:34:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:12.458 21:34:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:12.458 21:34:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:12.458 21:34:47 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.458 21:34:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:12.458 21:34:47 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.356 21:34:49 nvmf_tcp.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:14.356 00:28:14.356 real 0m7.499s 00:28:14.356 user 0m11.946s 00:28:14.356 sys 0m2.289s 00:28:14.356 21:34:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:14.356 21:34:49 nvmf_tcp.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:14.356 ************************************ 00:28:14.356 END TEST nvmf_multicontroller 00:28:14.356 ************************************ 00:28:14.356 21:34:49 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:14.356 21:34:49 nvmf_tcp -- nvmf/nvmf.sh@92 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:14.356 21:34:49 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:14.356 21:34:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:14.356 21:34:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:14.356 ************************************ 00:28:14.356 START TEST nvmf_aer 00:28:14.356 ************************************ 00:28:14.356 21:34:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:28:14.612 * Looking for test storage... 
00:28:14.612 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:14.612 21:34:49 nvmf_tcp.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:28:14.613 21:34:49 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # 
pci_net_devs=() 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.516 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:16.517 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 
0x159b)' 00:28:16.517 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:16.517 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:16.517 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.517 
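The nvmf_tcp_init trace that follows builds a two-namespace loopback topology out of the two E810 ports discovered above. As a minimal sketch (the cvl_0_0/cvl_0_1 interface names and the 10.0.0.0/24 addressing are specific to this run, not general defaults), the setup reduces to:
ip -4 addr flush cvl_0_0                 # drop any stale addressing on both ports
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk             # the target side lives in its own namespace
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
ping -c 1 10.0.0.2                                            # reachability sanity check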
21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:16.517 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:16.775 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:16.775 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:16.775 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:16.775 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:16.775 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:28:16.775 00:28:16.775 --- 10.0.0.2 ping statistics --- 00:28:16.775 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.775 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:16.775 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:16.775 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:16.775 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:28:16.775 00:28:16.776 --- 10.0.0.1 ping statistics --- 00:28:16.776 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:16.776 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=996390 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 996390 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 996390 ']' 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:16.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:16.776 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:16.776 [2024-07-11 21:34:51.381247] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:16.776 [2024-07-11 21:34:51.381351] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:16.776 EAL: No free 2048 kB hugepages reported on node 1 00:28:16.776 [2024-07-11 21:34:51.451971] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:16.776 [2024-07-11 21:34:51.543957] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:16.776 [2024-07-11 21:34:51.544017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:16.776 [2024-07-11 21:34:51.544043] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:16.776 [2024-07-11 21:34:51.544057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:16.776 [2024-07-11 21:34:51.544070] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:16.776 [2024-07-11 21:34:51.544152] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:16.776 [2024-07-11 21:34:51.544222] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:16.776 [2024-07-11 21:34:51.544271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:16.776 [2024-07-11 21:34:51.544274] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.034 [2024-07-11 21:34:51.700683] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.034 Malloc0 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.034 [2024-07-11 21:34:51.751983] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.034 [ 00:28:17.034 { 00:28:17.034 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:17.034 "subtype": "Discovery", 00:28:17.034 "listen_addresses": [], 00:28:17.034 "allow_any_host": true, 00:28:17.034 "hosts": [] 00:28:17.034 }, 00:28:17.034 { 00:28:17.034 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.034 "subtype": "NVMe", 00:28:17.034 "listen_addresses": [ 00:28:17.034 { 00:28:17.034 "trtype": "TCP", 00:28:17.034 "adrfam": "IPv4", 00:28:17.034 "traddr": "10.0.0.2", 00:28:17.034 "trsvcid": "4420" 00:28:17.034 } 00:28:17.034 ], 00:28:17.034 "allow_any_host": true, 00:28:17.034 "hosts": [], 00:28:17.034 "serial_number": "SPDK00000000000001", 00:28:17.034 "model_number": "SPDK bdev Controller", 00:28:17.034 "max_namespaces": 2, 00:28:17.034 "min_cntlid": 1, 00:28:17.034 "max_cntlid": 65519, 00:28:17.034 "namespaces": [ 00:28:17.034 { 00:28:17.034 "nsid": 1, 00:28:17.034 "bdev_name": "Malloc0", 00:28:17.034 "name": "Malloc0", 00:28:17.034 "nguid": "922170D5B151491DBA12430183916330", 00:28:17.034 "uuid": "922170d5-b151-491d-ba12-430183916330" 00:28:17.034 } 00:28:17.034 ] 00:28:17.034 } 00:28:17.034 ] 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@33 -- # aerpid=996443 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:28:17.034 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:17.291 EAL: No free 2048 kB hugepages reported on node 1 00:28:17.291 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:17.291 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:28:17.291 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:28:17.291 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:17.291 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:28:17.291 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 2 -lt 200 ']' 00:28:17.291 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1268 -- # i=3 00:28:17.291 21:34:51 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.549 Malloc1 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.549 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.549 [ 00:28:17.549 { 00:28:17.549 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:17.549 "subtype": "Discovery", 00:28:17.549 "listen_addresses": [], 00:28:17.549 "allow_any_host": true, 00:28:17.549 "hosts": [] 00:28:17.549 }, 00:28:17.549 { 00:28:17.549 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:17.549 "subtype": "NVMe", 00:28:17.549 "listen_addresses": [ 00:28:17.549 { 00:28:17.549 "trtype": "TCP", 00:28:17.549 "adrfam": "IPv4", 00:28:17.549 "traddr": "10.0.0.2", 00:28:17.549 "trsvcid": "4420" 00:28:17.550 } 00:28:17.550 ], 00:28:17.550 "allow_any_host": true, 00:28:17.550 "hosts": [], 00:28:17.550 "serial_number": "SPDK00000000000001", 00:28:17.550 "model_number": "SPDK bdev Controller", 00:28:17.550 "max_namespaces": 2, 00:28:17.550 "min_cntlid": 1, 00:28:17.550 "max_cntlid": 65519, 00:28:17.550 "namespaces": [ 00:28:17.550 { 00:28:17.550 "nsid": 1, 00:28:17.550 "bdev_name": "Malloc0", 00:28:17.550 "name": "Malloc0", 00:28:17.550 "nguid": "922170D5B151491DBA12430183916330", 00:28:17.550 "uuid": "922170d5-b151-491d-ba12-430183916330" 00:28:17.550 }, 00:28:17.550 { 00:28:17.550 "nsid": 2, 00:28:17.550 "bdev_name": "Malloc1", 00:28:17.550 "name": "Malloc1", 00:28:17.550 "nguid": "A266A4AFD91C44CB9639589D2672115E", 00:28:17.550 "uuid": "a266a4af-d91c-44cb-9639-589d2672115e" 00:28:17.550 } 00:28:17.550 ] 00:28:17.550 } 00:28:17.550 ] 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@43 -- # wait 996443 00:28:17.550 Asynchronous Event Request test 00:28:17.550 Attaching to 10.0.0.2 00:28:17.550 Attached to 10.0.0.2 00:28:17.550 Registering asynchronous event callbacks... 00:28:17.550 Starting namespace attribute notice tests for all controllers... 
00:28:17.550 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:28:17.550 aer_cb - Changed Namespace 00:28:17.550 Cleaning up... 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:17.550 rmmod nvme_tcp 00:28:17.550 rmmod nvme_fabrics 00:28:17.550 rmmod nvme_keyring 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 996390 ']' 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 996390 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 996390 ']' 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 996390 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # uname 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 996390 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 996390' 00:28:17.550 killing process with pid 996390 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@967 -- # kill 996390 00:28:17.550 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@972 -- # wait 996390 00:28:17.808 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 
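Condensed, the nvmf_aer body traced above amounts to the following RPC sequence (NQNs, sizes, addresses and the touch-file path are the values from this run; rpc.py stands in for the suite's rpc_cmd wrapper):
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py bdev_malloc_create 64 512 --name Malloc0
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The aer tool attaches and waits; it signals readiness via the touch file:
test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file &
# Adding a second namespace while the tool is attached is what fires the
# "Changed Namespace" asynchronous event seen in the aer_cb output above:
rpc.py bdev_malloc_create 64 4096 --name Malloc1
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2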
00:28:17.808 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:17.808 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:17.808 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:17.808 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:17.808 21:34:52 nvmf_tcp.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:17.808 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:17.808 21:34:52 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.340 21:34:54 nvmf_tcp.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:20.340 00:28:20.340 real 0m5.464s 00:28:20.340 user 0m4.583s 00:28:20.340 sys 0m1.931s 00:28:20.340 21:34:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:20.340 21:34:54 nvmf_tcp.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:28:20.340 ************************************ 00:28:20.340 END TEST nvmf_aer 00:28:20.340 ************************************ 00:28:20.340 21:34:54 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:20.340 21:34:54 nvmf_tcp -- nvmf/nvmf.sh@93 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:20.340 21:34:54 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:20.340 21:34:54 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.340 21:34:54 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:20.340 ************************************ 00:28:20.340 START TEST nvmf_async_init 00:28:20.340 ************************************ 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:28:20.340 * Looking for test storage... 
00:28:20.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:28:20.340 21:34:54 nvmf_tcp.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@20 -- # nguid=ca985eb14b9b4220bd2960d2ed744e70 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:20.341 21:34:54 
nvmf_tcp.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:28:20.341 21:34:54 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- 
nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:22.238 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:22.238 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:22.238 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 
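The device classification being traced here buckets known PCI IDs into per-family arrays and then resolves each selected port to its kernel netdev. A minimal sketch of that logic, assuming pci_bus_cache is the "vendor:device" to PCI-address map that nvmf/common.sh populates during setup:
intel=0x8086
e810=(${pci_bus_cache["$intel:0x1592"]} ${pci_bus_cache["$intel:0x159b"]})
pci_devs=("${e810[@]}")        # tcp runs on this pool use the E810 ports
for pci in "${pci_devs[@]}"; do
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # sysfs netdev entries
    pci_net_devs=("${pci_net_devs[@]##*/}")           # keep only the names
    net_devs+=("${pci_net_devs[@]}")                  # e.g. cvl_0_0, cvl_0_1
done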
00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:22.238 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:22.238 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:22.238 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:28:22.238 00:28:22.238 --- 10.0.0.2 ping statistics --- 00:28:22.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.238 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:22.238 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:22.238 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.111 ms 00:28:22.238 00:28:22.238 --- 10.0.0.1 ping statistics --- 00:28:22.238 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:22.238 rtt min/avg/max/mdev = 0.111/0.111/0.111/0.000 ms 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=998381 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 998381 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 998381 ']' 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:22.238 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:22.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.239 [2024-07-11 21:34:56.714063] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
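The nvmfappstart step traced here launches the target inside the namespace set up earlier; note the core mask is 0x1 (a single reactor) where the aer run used 0xF. In outline, with waitforlisten being the suite helper that blocks until the RPC socket is up:
ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!
waitforlisten "$nvmfpid"   # returns once the app listens on /var/tmp/spdk.sock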
00:28:22.239 [2024-07-11 21:34:56.714161] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:22.239 EAL: No free 2048 kB hugepages reported on node 1 00:28:22.239 [2024-07-11 21:34:56.778959] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:22.239 [2024-07-11 21:34:56.862409] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:22.239 [2024-07-11 21:34:56.862463] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:22.239 [2024-07-11 21:34:56.862488] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:22.239 [2024-07-11 21:34:56.862499] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:22.239 [2024-07-11 21:34:56.862508] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:22.239 [2024-07-11 21:34:56.862533] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.239 21:34:56 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.239 [2024-07-11 21:34:57.000249] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:22.239 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.239 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:28:22.239 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.239 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.495 null0 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.495 21:34:57 
nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g ca985eb14b9b4220bd2960d2ed744e70 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.495 [2024-07-11 21:34:57.040475] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.495 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.752 nvme0n1 00:28:22.752 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.752 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.753 [ 00:28:22.753 { 00:28:22.753 "name": "nvme0n1", 00:28:22.753 "aliases": [ 00:28:22.753 "ca985eb1-4b9b-4220-bd29-60d2ed744e70" 00:28:22.753 ], 00:28:22.753 "product_name": "NVMe disk", 00:28:22.753 "block_size": 512, 00:28:22.753 "num_blocks": 2097152, 00:28:22.753 "uuid": "ca985eb1-4b9b-4220-bd29-60d2ed744e70", 00:28:22.753 "assigned_rate_limits": { 00:28:22.753 "rw_ios_per_sec": 0, 00:28:22.753 "rw_mbytes_per_sec": 0, 00:28:22.753 "r_mbytes_per_sec": 0, 00:28:22.753 "w_mbytes_per_sec": 0 00:28:22.753 }, 00:28:22.753 "claimed": false, 00:28:22.753 "zoned": false, 00:28:22.753 "supported_io_types": { 00:28:22.753 "read": true, 00:28:22.753 "write": true, 00:28:22.753 "unmap": false, 00:28:22.753 "flush": true, 00:28:22.753 "reset": true, 00:28:22.753 "nvme_admin": true, 00:28:22.753 "nvme_io": true, 00:28:22.753 "nvme_io_md": false, 00:28:22.753 "write_zeroes": true, 00:28:22.753 "zcopy": false, 00:28:22.753 "get_zone_info": false, 00:28:22.753 "zone_management": false, 00:28:22.753 "zone_append": false, 00:28:22.753 "compare": true, 00:28:22.753 "compare_and_write": true, 00:28:22.753 "abort": true, 00:28:22.753 "seek_hole": false, 00:28:22.753 "seek_data": false, 00:28:22.753 "copy": true, 00:28:22.753 "nvme_iov_md": false 00:28:22.753 }, 00:28:22.753 "memory_domains": [ 00:28:22.753 { 00:28:22.753 "dma_device_id": "system", 00:28:22.753 "dma_device_type": 1 00:28:22.753 } 00:28:22.753 ], 00:28:22.753 "driver_specific": { 00:28:22.753 "nvme": [ 00:28:22.753 { 00:28:22.753 "trid": { 00:28:22.753 "trtype": "TCP", 00:28:22.753 "adrfam": "IPv4", 00:28:22.753 "traddr": "10.0.0.2", 
00:28:22.753 "trsvcid": "4420", 00:28:22.753 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:22.753 }, 00:28:22.753 "ctrlr_data": { 00:28:22.753 "cntlid": 1, 00:28:22.753 "vendor_id": "0x8086", 00:28:22.753 "model_number": "SPDK bdev Controller", 00:28:22.753 "serial_number": "00000000000000000000", 00:28:22.753 "firmware_revision": "24.09", 00:28:22.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:22.753 "oacs": { 00:28:22.753 "security": 0, 00:28:22.753 "format": 0, 00:28:22.753 "firmware": 0, 00:28:22.753 "ns_manage": 0 00:28:22.753 }, 00:28:22.753 "multi_ctrlr": true, 00:28:22.753 "ana_reporting": false 00:28:22.753 }, 00:28:22.753 "vs": { 00:28:22.753 "nvme_version": "1.3" 00:28:22.753 }, 00:28:22.753 "ns_data": { 00:28:22.753 "id": 1, 00:28:22.753 "can_share": true 00:28:22.753 } 00:28:22.753 } 00:28:22.753 ], 00:28:22.753 "mp_policy": "active_passive" 00:28:22.753 } 00:28:22.753 } 00:28:22.753 ] 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.753 [2024-07-11 21:34:57.293720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:28:22.753 [2024-07-11 21:34:57.293838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2558500 (9): Bad file descriptor 00:28:22.753 [2024-07-11 21:34:57.425885] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.753 [ 00:28:22.753 { 00:28:22.753 "name": "nvme0n1", 00:28:22.753 "aliases": [ 00:28:22.753 "ca985eb1-4b9b-4220-bd29-60d2ed744e70" 00:28:22.753 ], 00:28:22.753 "product_name": "NVMe disk", 00:28:22.753 "block_size": 512, 00:28:22.753 "num_blocks": 2097152, 00:28:22.753 "uuid": "ca985eb1-4b9b-4220-bd29-60d2ed744e70", 00:28:22.753 "assigned_rate_limits": { 00:28:22.753 "rw_ios_per_sec": 0, 00:28:22.753 "rw_mbytes_per_sec": 0, 00:28:22.753 "r_mbytes_per_sec": 0, 00:28:22.753 "w_mbytes_per_sec": 0 00:28:22.753 }, 00:28:22.753 "claimed": false, 00:28:22.753 "zoned": false, 00:28:22.753 "supported_io_types": { 00:28:22.753 "read": true, 00:28:22.753 "write": true, 00:28:22.753 "unmap": false, 00:28:22.753 "flush": true, 00:28:22.753 "reset": true, 00:28:22.753 "nvme_admin": true, 00:28:22.753 "nvme_io": true, 00:28:22.753 "nvme_io_md": false, 00:28:22.753 "write_zeroes": true, 00:28:22.753 "zcopy": false, 00:28:22.753 "get_zone_info": false, 00:28:22.753 "zone_management": false, 00:28:22.753 "zone_append": false, 00:28:22.753 "compare": true, 00:28:22.753 "compare_and_write": true, 00:28:22.753 "abort": true, 00:28:22.753 "seek_hole": false, 00:28:22.753 "seek_data": false, 00:28:22.753 "copy": true, 00:28:22.753 "nvme_iov_md": false 00:28:22.753 }, 00:28:22.753 "memory_domains": [ 00:28:22.753 { 00:28:22.753 "dma_device_id": "system", 00:28:22.753 "dma_device_type": 
1 00:28:22.753 } 00:28:22.753 ], 00:28:22.753 "driver_specific": { 00:28:22.753 "nvme": [ 00:28:22.753 { 00:28:22.753 "trid": { 00:28:22.753 "trtype": "TCP", 00:28:22.753 "adrfam": "IPv4", 00:28:22.753 "traddr": "10.0.0.2", 00:28:22.753 "trsvcid": "4420", 00:28:22.753 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:22.753 }, 00:28:22.753 "ctrlr_data": { 00:28:22.753 "cntlid": 2, 00:28:22.753 "vendor_id": "0x8086", 00:28:22.753 "model_number": "SPDK bdev Controller", 00:28:22.753 "serial_number": "00000000000000000000", 00:28:22.753 "firmware_revision": "24.09", 00:28:22.753 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:22.753 "oacs": { 00:28:22.753 "security": 0, 00:28:22.753 "format": 0, 00:28:22.753 "firmware": 0, 00:28:22.753 "ns_manage": 0 00:28:22.753 }, 00:28:22.753 "multi_ctrlr": true, 00:28:22.753 "ana_reporting": false 00:28:22.753 }, 00:28:22.753 "vs": { 00:28:22.753 "nvme_version": "1.3" 00:28:22.753 }, 00:28:22.753 "ns_data": { 00:28:22.753 "id": 1, 00:28:22.753 "can_share": true 00:28:22.753 } 00:28:22.753 } 00:28:22.753 ], 00:28:22.753 "mp_policy": "active_passive" 00:28:22.753 } 00:28:22.753 } 00:28:22.753 ] 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.753 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.jy7IOLRMe0 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.jy7IOLRMe0 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 [2024-07-11 21:34:57.474395] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:28:22.754 [2024-07-11 21:34:57.474538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jy7IOLRMe0 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
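The TLS phase of async_init, traced around this point, condenses to: write a PSK to a temp file, disable open host access, add a second listener on port 4421 with --secure-channel, authorize the host with the key, and re-attach over TLS (the key path and PSK literal are this run's values; rpc.py again stands in for rpc_cmd):
key=/tmp/tmp.jy7IOLRMe0
echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: > "$key"
chmod 0600 "$key"          # the target refuses keys with loose permissions
rpc.py nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel
rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk "$key"
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 \
    -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk "$key"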
00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 [2024-07-11 21:34:57.482411] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.jy7IOLRMe0 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:22.754 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:22.754 [2024-07-11 21:34:57.490432] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:28:22.754 [2024-07-11 21:34:57.490488] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:28:23.011 nvme0n1 00:28:23.011 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.011 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:28:23.011 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.011 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.011 [ 00:28:23.011 { 00:28:23.011 "name": "nvme0n1", 00:28:23.011 "aliases": [ 00:28:23.011 "ca985eb1-4b9b-4220-bd29-60d2ed744e70" 00:28:23.011 ], 00:28:23.011 "product_name": "NVMe disk", 00:28:23.011 "block_size": 512, 00:28:23.011 "num_blocks": 2097152, 00:28:23.011 "uuid": "ca985eb1-4b9b-4220-bd29-60d2ed744e70", 00:28:23.011 "assigned_rate_limits": { 00:28:23.011 "rw_ios_per_sec": 0, 00:28:23.011 "rw_mbytes_per_sec": 0, 00:28:23.011 "r_mbytes_per_sec": 0, 00:28:23.011 "w_mbytes_per_sec": 0 00:28:23.011 }, 00:28:23.011 "claimed": false, 00:28:23.011 "zoned": false, 00:28:23.011 "supported_io_types": { 00:28:23.011 "read": true, 00:28:23.011 "write": true, 00:28:23.011 "unmap": false, 00:28:23.011 "flush": true, 00:28:23.011 "reset": true, 00:28:23.011 "nvme_admin": true, 00:28:23.011 "nvme_io": true, 00:28:23.011 "nvme_io_md": false, 00:28:23.011 "write_zeroes": true, 00:28:23.011 "zcopy": false, 00:28:23.011 "get_zone_info": false, 00:28:23.011 "zone_management": false, 00:28:23.011 "zone_append": false, 00:28:23.011 "compare": true, 00:28:23.011 "compare_and_write": true, 00:28:23.011 "abort": true, 00:28:23.011 "seek_hole": false, 00:28:23.011 "seek_data": false, 00:28:23.011 "copy": true, 00:28:23.011 "nvme_iov_md": false 00:28:23.011 }, 00:28:23.011 "memory_domains": [ 00:28:23.011 { 00:28:23.011 "dma_device_id": "system", 00:28:23.011 "dma_device_type": 1 00:28:23.011 } 00:28:23.011 ], 00:28:23.011 "driver_specific": { 00:28:23.011 "nvme": [ 00:28:23.011 { 00:28:23.011 "trid": { 00:28:23.011 "trtype": "TCP", 00:28:23.011 "adrfam": "IPv4", 00:28:23.011 "traddr": "10.0.0.2", 00:28:23.011 "trsvcid": "4421", 00:28:23.011 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:28:23.011 }, 00:28:23.011 "ctrlr_data": { 00:28:23.011 "cntlid": 3, 00:28:23.011 "vendor_id": "0x8086", 00:28:23.011 "model_number": "SPDK bdev Controller", 00:28:23.011 "serial_number": "00000000000000000000", 00:28:23.011 "firmware_revision": "24.09", 00:28:23.011 "subnqn": "nqn.2016-06.io.spdk:cnode0", 
00:28:23.011 "oacs": { 00:28:23.011 "security": 0, 00:28:23.011 "format": 0, 00:28:23.011 "firmware": 0, 00:28:23.011 "ns_manage": 0 00:28:23.011 }, 00:28:23.011 "multi_ctrlr": true, 00:28:23.011 "ana_reporting": false 00:28:23.011 }, 00:28:23.012 "vs": { 00:28:23.012 "nvme_version": "1.3" 00:28:23.012 }, 00:28:23.012 "ns_data": { 00:28:23.012 "id": 1, 00:28:23.012 "can_share": true 00:28:23.012 } 00:28:23.012 } 00:28:23.012 ], 00:28:23.012 "mp_policy": "active_passive" 00:28:23.012 } 00:28:23.012 } 00:28:23.012 ] 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.jy7IOLRMe0 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:23.012 rmmod nvme_tcp 00:28:23.012 rmmod nvme_fabrics 00:28:23.012 rmmod nvme_keyring 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 998381 ']' 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 998381 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 998381 ']' 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 998381 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 998381 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 998381' 00:28:23.012 killing process with pid 998381 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 998381 00:28:23.012 [2024-07-11 21:34:57.689413] app.c:1023:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for 
removal in v24.09 hit 1 times 00:28:23.012 [2024-07-11 21:34:57.689452] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:28:23.012 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 998381 00:28:23.270 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:23.270 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:23.270 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:23.270 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:23.270 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:23.270 21:34:57 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:23.270 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:23.270 21:34:57 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.178 21:34:59 nvmf_tcp.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:25.436 00:28:25.436 real 0m5.315s 00:28:25.436 user 0m1.962s 00:28:25.436 sys 0m1.699s 00:28:25.436 21:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.436 21:34:59 nvmf_tcp.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:28:25.436 ************************************ 00:28:25.436 END TEST nvmf_async_init 00:28:25.436 ************************************ 00:28:25.436 21:34:59 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:25.436 21:34:59 nvmf_tcp -- nvmf/nvmf.sh@94 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:25.436 21:34:59 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:25.436 21:34:59 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.436 21:34:59 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.436 ************************************ 00:28:25.436 START TEST dma 00:28:25.436 ************************************ 00:28:25.436 21:35:00 nvmf_tcp.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:28:25.436 * Looking for test storage... 
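A note on the nvmf_async_init teardown recorded above: the test removes its PSK keyfile, unloads the kernel initiator modules (the rmmod lines for nvme_tcp, nvme_fabrics and nvme_keyring), kills the target process, and lets the harness drop the target namespace. A rough manual equivalent, assuming this rig's names; _remove_spdk_ns is a harness helper, approximated here with ip netns:

  rm -f /tmp/tmp.jy7IOLRMe0           # PSK keyfile staged by the test
  modprobe -v -r nvme-tcp             # the rmmod output above shows its dependents leaving too
  modprobe -v -r nvme-fabrics
  ip netns delete cvl_0_0_ns_spdk     # stand-in for the harness's _remove_spdk_ns
  ip -4 addr flush cvl_0_1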
00:28:25.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.436 21:35:00 nvmf_tcp.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # uname -s 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.436 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.437 21:35:00 nvmf_tcp.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.437 21:35:00 nvmf_tcp.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.437 21:35:00 nvmf_tcp.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.437 21:35:00 nvmf_tcp.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.437 21:35:00 nvmf_tcp.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.437 21:35:00 nvmf_tcp.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.437 21:35:00 nvmf_tcp.dma -- paths/export.sh@5 -- # export PATH 00:28:25.437 21:35:00 nvmf_tcp.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@47 -- # : 0 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:25.437 21:35:00 nvmf_tcp.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:25.437 21:35:00 nvmf_tcp.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:28:25.437 21:35:00 nvmf_tcp.dma -- host/dma.sh@13 -- # exit 0 00:28:25.437 00:28:25.437 real 0m0.069s 00:28:25.437 user 0m0.032s 00:28:25.437 sys 0m0.042s 00:28:25.437 21:35:00 nvmf_tcp.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:25.437 21:35:00 nvmf_tcp.dma -- common/autotest_common.sh@10 -- # set +x 00:28:25.437 ************************************ 00:28:25.437 END TEST dma 00:28:25.437 ************************************ 00:28:25.437 21:35:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:25.437 21:35:00 nvmf_tcp -- nvmf/nvmf.sh@97 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:25.437 21:35:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:25.437 21:35:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:25.437 21:35:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:25.437 ************************************ 00:28:25.437 START TEST nvmf_identify 00:28:25.437 ************************************ 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:28:25.437 * Looking for test storage... 
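The dma suite just completed in a fraction of a second because it is deliberately a no-op here: the dma host test only exists for RDMA, so after sourcing nvmf/common.sh it checks the transport and bails out. The guard, paraphrased from the xtrace at host/dma.sh@12-13 (the trace only shows the variable already expanded to "tcp", so the name TEST_TRANSPORT below is illustrative):

  if [ "$TEST_TRANSPORT" != "rdma" ]; then
      exit 0    # nothing to exercise over NVMe/TCP; the dma path is RDMA-only
  fi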
00:28:25.437 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:28:25.437 21:35:00 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:27.964 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:27.964 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:27.964 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:27.964 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:27.964 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:27.965 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:27.965 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:28:27.965 00:28:27.965 --- 10.0.0.2 ping statistics --- 00:28:27.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.965 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:27.965 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.965 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.175 ms 00:28:27.965 00:28:27.965 --- 10.0.0.1 ping statistics --- 00:28:27.965 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.965 rtt min/avg/max/mdev = 0.175/0.175/0.175/0.000 ms 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=1000506 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 1000506 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 1000506 ']' 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:27.965 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:27.965 [2024-07-11 21:35:02.526348] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:27.965 [2024-07-11 21:35:02.526420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.965 EAL: No free 2048 kB hugepages reported on node 1 00:28:27.965 [2024-07-11 21:35:02.590248] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:27.965 [2024-07-11 21:35:02.676472] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
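The nvmftestinit bring-up traced above moves one port of the dual-port E810 (cvl_0_0) into a fresh network namespace, keeps its sibling (cvl_0_1) on the host as the initiator side, addresses both ends, opens the NVMe/TCP port, and proves connectivity with one ping in each direction (0% loss both ways). The same plumbing as a standalone sketch, using the names from this rig:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target side lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays on the host
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1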
00:28:27.965 [2024-07-11 21:35:02.676524] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:27.965 [2024-07-11 21:35:02.676537] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.965 [2024-07-11 21:35:02.676549] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.965 [2024-07-11 21:35:02.676559] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.965 [2024-07-11 21:35:02.676687] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.965 [2024-07-11 21:35:02.676764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.965 [2024-07-11 21:35:02.676820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.965 [2024-07-11 21:35:02.676825] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.224 [2024-07-11 21:35:02.800522] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.224 Malloc0 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 
-- # xtrace_disable 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.224 [2024-07-11 21:35:02.877521] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.224 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.224 [ 00:28:28.224 { 00:28:28.224 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:28.224 "subtype": "Discovery", 00:28:28.224 "listen_addresses": [ 00:28:28.224 { 00:28:28.224 "trtype": "TCP", 00:28:28.224 "adrfam": "IPv4", 00:28:28.224 "traddr": "10.0.0.2", 00:28:28.224 "trsvcid": "4420" 00:28:28.225 } 00:28:28.225 ], 00:28:28.225 "allow_any_host": true, 00:28:28.225 "hosts": [] 00:28:28.225 }, 00:28:28.225 { 00:28:28.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:28.225 "subtype": "NVMe", 00:28:28.225 "listen_addresses": [ 00:28:28.225 { 00:28:28.225 "trtype": "TCP", 00:28:28.225 "adrfam": "IPv4", 00:28:28.225 "traddr": "10.0.0.2", 00:28:28.225 "trsvcid": "4420" 00:28:28.225 } 00:28:28.225 ], 00:28:28.225 "allow_any_host": true, 00:28:28.225 "hosts": [], 00:28:28.225 "serial_number": "SPDK00000000000001", 00:28:28.225 "model_number": "SPDK bdev Controller", 00:28:28.225 "max_namespaces": 32, 00:28:28.225 "min_cntlid": 1, 00:28:28.225 "max_cntlid": 65519, 00:28:28.225 "namespaces": [ 00:28:28.225 { 00:28:28.225 "nsid": 1, 00:28:28.225 "bdev_name": "Malloc0", 00:28:28.225 "name": "Malloc0", 00:28:28.225 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:28:28.225 "eui64": "ABCDEF0123456789", 00:28:28.225 "uuid": "99a11396-0341-4a9e-ae15-54f95c2a7670" 00:28:28.225 } 00:28:28.225 ] 00:28:28.225 } 00:28:28.225 ] 00:28:28.225 21:35:02 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.225 21:35:02 nvmf_tcp.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:28:28.225 [2024-07-11 21:35:02.919679] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
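By the time spdk_nvme_identify launches, the target has been configured entirely over RPC: a TCP transport with the options recorded above, a 64 MB malloc bdev with 512-byte blocks, subsystem cnode1 carrying that namespace with a fixed NGUID/EUI-64, and both a data listener and the discovery service on 10.0.0.2 port 4420; the nvmf_get_subsystems dump above reflects exactly that state. The same configuration as an rpc.py sketch (rpc_cmd runs these inside the target's namespace):

  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 \
      --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420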
00:28:28.225 [2024-07-11 21:35:02.919724] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000643 ] 00:28:28.225 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.225 [2024-07-11 21:35:02.956391] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:28:28.225 [2024-07-11 21:35:02.956468] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:28.225 [2024-07-11 21:35:02.956478] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:28.225 [2024-07-11 21:35:02.956495] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:28.225 [2024-07-11 21:35:02.956506] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:28.225 [2024-07-11 21:35:02.956811] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:28:28.225 [2024-07-11 21:35:02.956878] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xb4ffe0 0 00:28:28.225 [2024-07-11 21:35:02.967782] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:28.225 [2024-07-11 21:35:02.967804] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:28.225 [2024-07-11 21:35:02.967813] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:28.225 [2024-07-11 21:35:02.967820] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:28.225 [2024-07-11 21:35:02.967880] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.967894] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.967903] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.225 [2024-07-11 21:35:02.967922] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:28.225 [2024-07-11 21:35:02.967948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.225 [2024-07-11 21:35:02.975783] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.225 [2024-07-11 21:35:02.975801] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.225 [2024-07-11 21:35:02.975808] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.975817] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.225 [2024-07-11 21:35:02.975839] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:28.225 [2024-07-11 21:35:02.975851] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:28:28.225 [2024-07-11 21:35:02.975862] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:28:28.225 [2024-07-11 21:35:02.975886] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.975895] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.975902] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.225 [2024-07-11 21:35:02.975913] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.225 [2024-07-11 21:35:02.975936] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.225 [2024-07-11 21:35:02.976058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.225 [2024-07-11 21:35:02.976074] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.225 [2024-07-11 21:35:02.976080] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.225 [2024-07-11 21:35:02.976097] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:28:28.225 [2024-07-11 21:35:02.976110] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:28:28.225 [2024-07-11 21:35:02.976123] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976130] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976136] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.225 [2024-07-11 21:35:02.976147] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.225 [2024-07-11 21:35:02.976168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.225 [2024-07-11 21:35:02.976275] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.225 [2024-07-11 21:35:02.976287] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.225 [2024-07-11 21:35:02.976294] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976305] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.225 [2024-07-11 21:35:02.976315] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:28:28.225 [2024-07-11 21:35:02.976329] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:28:28.225 [2024-07-11 21:35:02.976341] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976349] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976355] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.225 [2024-07-11 21:35:02.976365] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.225 [2024-07-11 21:35:02.976386] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.225 [2024-07-11 21:35:02.976480] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.225 
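The *DEBUG* stream here is spdk_nvme_identify bringing up its admin queue by the book: ICReq/ICResp on the new TCP socket, a FABRIC CONNECT capsule on qid 0 that returns CNTLID 0x0001, then FABRIC PROPERTY GET reads of VS, CAP and CC ahead of the enable handshake. For an out-of-band comparison, the same target can be reached with the kernel initiator via nvme-cli; a hedged sketch, since nvme-cli is not part of this test and the device node will vary:

  nvme discover -t tcp -a 10.0.0.2 -s 4420             # discovery service shares the port
  nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1
  nvme id-ctrl /dev/nvme0                              # same data the IDENTIFY below fetches
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1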
[2024-07-11 21:35:02.976495] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.225 [2024-07-11 21:35:02.976502] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976509] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.225 [2024-07-11 21:35:02.976518] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:28.225 [2024-07-11 21:35:02.976535] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976544] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976550] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.225 [2024-07-11 21:35:02.976561] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.225 [2024-07-11 21:35:02.976581] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.225 [2024-07-11 21:35:02.976673] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.225 [2024-07-11 21:35:02.976685] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.225 [2024-07-11 21:35:02.976692] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976698] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.225 [2024-07-11 21:35:02.976708] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:28:28.225 [2024-07-11 21:35:02.976717] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:28:28.225 [2024-07-11 21:35:02.976729] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:28.225 [2024-07-11 21:35:02.976840] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:28:28.225 [2024-07-11 21:35:02.976851] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:28.225 [2024-07-11 21:35:02.976866] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976874] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.976880] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.225 [2024-07-11 21:35:02.976891] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.225 [2024-07-11 21:35:02.976912] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.225 [2024-07-11 21:35:02.977011] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.225 [2024-07-11 21:35:02.977026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.225 [2024-07-11 21:35:02.977033] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:28:28.225 [2024-07-11 21:35:02.977040] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.225 [2024-07-11 21:35:02.977048] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:28.225 [2024-07-11 21:35:02.977064] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.977073] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.225 [2024-07-11 21:35:02.977079] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.225 [2024-07-11 21:35:02.977090] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.225 [2024-07-11 21:35:02.977110] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.225 [2024-07-11 21:35:02.977203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.225 [2024-07-11 21:35:02.977219] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.225 [2024-07-11 21:35:02.977225] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.226 [2024-07-11 21:35:02.977232] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.226 [2024-07-11 21:35:02.977241] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:28.226 [2024-07-11 21:35:02.977250] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:28:28.226 [2024-07-11 21:35:02.977264] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:28:28.226 [2024-07-11 21:35:02.977278] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:28:28.226 [2024-07-11 21:35:02.977295] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.226 [2024-07-11 21:35:02.977303] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.226 [2024-07-11 21:35:02.977314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.226 [2024-07-11 21:35:02.977335] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.226 [2024-07-11 21:35:02.977479] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.226 [2024-07-11 21:35:02.977494] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.226 [2024-07-11 21:35:02.977501] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.226 [2024-07-11 21:35:02.977508] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb4ffe0): datao=0, datal=4096, cccid=0 00:28:28.226 [2024-07-11 21:35:02.977517] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb6880) on tqpair(0xb4ffe0): expected_datao=0, payload_size=4096 00:28:28.226 [2024-07-11 21:35:02.977525] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
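This stretch is the textbook fabrics enable handshake: with CC.EN = 0 and CSTS.RDY = 0 confirmed, the driver writes CC.EN = 1 (the FABRIC PROPERTY SET), waits until CSTS.RDY = 1, then declares the controller ready and issues IDENTIFY. The command decodes as opcode 06h with cdw10:00000001, i.e. CNS 01h (Identify Controller), and the c2h_data line confirms the 4096-byte payload (datal=4096). Once connected through the kernel initiator, the equivalent raw command would be (device node illustrative):

  # IDENTIFY controller: opcode 0x06, CDW10 = CNS 0x01, 4 KiB read payload
  nvme admin-passthru /dev/nvme0 --opcode=0x06 --cdw10=1 --data-len=4096 --read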
00:28:28.226 [2024-07-11 21:35:02.977543] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.226 [2024-07-11 21:35:02.977553] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.017844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.486 [2024-07-11 21:35:03.017864] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.486 [2024-07-11 21:35:03.017872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.017879] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.486 [2024-07-11 21:35:03.017899] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:28:28.486 [2024-07-11 21:35:03.017913] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:28:28.486 [2024-07-11 21:35:03.017922] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:28:28.486 [2024-07-11 21:35:03.017932] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:28:28.486 [2024-07-11 21:35:03.017941] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:28:28.486 [2024-07-11 21:35:03.017949] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:28:28.486 [2024-07-11 21:35:03.017964] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:28:28.486 [2024-07-11 21:35:03.017977] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.017985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.017991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.018003] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:28.486 [2024-07-11 21:35:03.018026] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.486 [2024-07-11 21:35:03.018125] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.486 [2024-07-11 21:35:03.018138] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.486 [2024-07-11 21:35:03.018144] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018151] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.486 [2024-07-11 21:35:03.018165] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018173] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018179] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.018189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.486 [2024-07-11 21:35:03.018199] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018213] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.018221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.486 [2024-07-11 21:35:03.018231] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018238] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018244] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.018253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.486 [2024-07-11 21:35:03.018262] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018275] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.018284] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.486 [2024-07-11 21:35:03.018292] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:28:28.486 [2024-07-11 21:35:03.018315] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:28.486 [2024-07-11 21:35:03.018329] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018336] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.018346] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.486 [2024-07-11 21:35:03.018369] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6880, cid 0, qid 0 00:28:28.486 [2024-07-11 21:35:03.018381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6a00, cid 1, qid 0 00:28:28.486 [2024-07-11 21:35:03.018389] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6b80, cid 2, qid 0 00:28:28.486 [2024-07-11 21:35:03.018396] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.486 [2024-07-11 21:35:03.018404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6e80, cid 4, qid 0 00:28:28.486 [2024-07-11 21:35:03.018526] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.486 [2024-07-11 21:35:03.018538] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.486 [2024-07-11 21:35:03.018545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6e80) on tqpair=0xb4ffe0 00:28:28.486 [2024-07-11 21:35:03.018562] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:28:28.486 [2024-07-11 21:35:03.018571] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:28:28.486 [2024-07-11 21:35:03.018588] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.018598] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.018608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.486 [2024-07-11 21:35:03.018629] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6e80, cid 4, qid 0 00:28:28.486 [2024-07-11 21:35:03.018740] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.486 [2024-07-11 21:35:03.022764] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.486 [2024-07-11 21:35:03.022777] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.022784] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb4ffe0): datao=0, datal=4096, cccid=4 00:28:28.486 [2024-07-11 21:35:03.022792] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb6e80) on tqpair(0xb4ffe0): expected_datao=0, payload_size=4096 00:28:28.486 [2024-07-11 21:35:03.022800] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.022810] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.022817] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.022830] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.486 [2024-07-11 21:35:03.022840] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.486 [2024-07-11 21:35:03.022846] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.022853] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6e80) on tqpair=0xb4ffe0 00:28:28.486 [2024-07-11 21:35:03.022873] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:28:28.486 [2024-07-11 21:35:03.022916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.022927] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.022943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.486 [2024-07-11 21:35:03.022956] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.022963] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.022970] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xb4ffe0) 00:28:28.486 [2024-07-11 21:35:03.022979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.486 [2024-07-11 21:35:03.023006] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0xbb6e80, cid 4, qid 0 00:28:28.486 [2024-07-11 21:35:03.023018] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb7000, cid 5, qid 0 00:28:28.486 [2024-07-11 21:35:03.023165] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.486 [2024-07-11 21:35:03.023181] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.486 [2024-07-11 21:35:03.023188] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.023194] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb4ffe0): datao=0, datal=1024, cccid=4 00:28:28.486 [2024-07-11 21:35:03.023202] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb6e80) on tqpair(0xb4ffe0): expected_datao=0, payload_size=1024 00:28:28.486 [2024-07-11 21:35:03.023210] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.023219] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.023226] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.023235] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.486 [2024-07-11 21:35:03.023244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.486 [2024-07-11 21:35:03.023250] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.023256] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb7000) on tqpair=0xb4ffe0 00:28:28.486 [2024-07-11 21:35:03.063843] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.486 [2024-07-11 21:35:03.063862] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.486 [2024-07-11 21:35:03.063870] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.486 [2024-07-11 21:35:03.063877] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6e80) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.063897] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.063907] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb4ffe0) 00:28:28.487 [2024-07-11 21:35:03.063918] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.487 [2024-07-11 21:35:03.063948] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6e80, cid 4, qid 0 00:28:28.487 [2024-07-11 21:35:03.064067] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.487 [2024-07-11 21:35:03.064083] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.487 [2024-07-11 21:35:03.064090] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064097] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb4ffe0): datao=0, datal=3072, cccid=4 00:28:28.487 [2024-07-11 21:35:03.064104] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb6e80) on tqpair(0xb4ffe0): expected_datao=0, payload_size=3072 00:28:28.487 [2024-07-11 21:35:03.064112] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064122] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064129] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064141] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.487 [2024-07-11 21:35:03.064156] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.487 [2024-07-11 21:35:03.064163] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064170] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6e80) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.064186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064194] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xb4ffe0) 00:28:28.487 [2024-07-11 21:35:03.064205] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.487 [2024-07-11 21:35:03.064233] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6e80, cid 4, qid 0 00:28:28.487 [2024-07-11 21:35:03.064340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.487 [2024-07-11 21:35:03.064353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.487 [2024-07-11 21:35:03.064360] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064366] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xb4ffe0): datao=0, datal=8, cccid=4 00:28:28.487 [2024-07-11 21:35:03.064374] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xbb6e80) on tqpair(0xb4ffe0): expected_datao=0, payload_size=8 00:28:28.487 [2024-07-11 21:35:03.064381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064391] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.064398] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.107789] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.487 [2024-07-11 21:35:03.107807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.487 [2024-07-11 21:35:03.107814] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.107821] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6e80) on tqpair=0xb4ffe0 00:28:28.487 ===================================================== 00:28:28.487 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:28:28.487 ===================================================== 00:28:28.487 Controller Capabilities/Features 00:28:28.487 ================================ 00:28:28.487 Vendor ID: 0000 00:28:28.487 Subsystem Vendor ID: 0000 00:28:28.487 Serial Number: .................... 00:28:28.487 Model Number: ........................................ 
00:28:28.487 Firmware Version: 24.09
00:28:28.487 Recommended Arb Burst: 0
00:28:28.487 IEEE OUI Identifier: 00 00 00
00:28:28.487 Multi-path I/O
00:28:28.487 May have multiple subsystem ports: No
00:28:28.487 May have multiple controllers: No
00:28:28.487 Associated with SR-IOV VF: No
00:28:28.487 Max Data Transfer Size: 131072
00:28:28.487 Max Number of Namespaces: 0
00:28:28.487 Max Number of I/O Queues: 1024
00:28:28.487 NVMe Specification Version (VS): 1.3
00:28:28.487 NVMe Specification Version (Identify): 1.3
00:28:28.487 Maximum Queue Entries: 128
00:28:28.487 Contiguous Queues Required: Yes
00:28:28.487 Arbitration Mechanisms Supported
00:28:28.487 Weighted Round Robin: Not Supported
00:28:28.487 Vendor Specific: Not Supported
00:28:28.487 Reset Timeout: 15000 ms
00:28:28.487 Doorbell Stride: 4 bytes
00:28:28.487 NVM Subsystem Reset: Not Supported
00:28:28.487 Command Sets Supported
00:28:28.487 NVM Command Set: Supported
00:28:28.487 Boot Partition: Not Supported
00:28:28.487 Memory Page Size Minimum: 4096 bytes
00:28:28.487 Memory Page Size Maximum: 4096 bytes
00:28:28.487 Persistent Memory Region: Not Supported
00:28:28.487 Optional Asynchronous Events Supported
00:28:28.487 Namespace Attribute Notices: Not Supported
00:28:28.487 Firmware Activation Notices: Not Supported
00:28:28.487 ANA Change Notices: Not Supported
00:28:28.487 PLE Aggregate Log Change Notices: Not Supported
00:28:28.487 LBA Status Info Alert Notices: Not Supported
00:28:28.487 EGE Aggregate Log Change Notices: Not Supported
00:28:28.487 Normal NVM Subsystem Shutdown event: Not Supported
00:28:28.487 Zone Descriptor Change Notices: Not Supported
00:28:28.487 Discovery Log Change Notices: Supported
00:28:28.487 Controller Attributes
00:28:28.487 128-bit Host Identifier: Not Supported
00:28:28.487 Non-Operational Permissive Mode: Not Supported
00:28:28.487 NVM Sets: Not Supported
00:28:28.487 Read Recovery Levels: Not Supported
00:28:28.487 Endurance Groups: Not Supported
00:28:28.487 Predictable Latency Mode: Not Supported
00:28:28.487 Traffic Based Keep ALive: Not Supported
00:28:28.487 Namespace Granularity: Not Supported
00:28:28.487 SQ Associations: Not Supported
00:28:28.487 UUID List: Not Supported
00:28:28.487 Multi-Domain Subsystem: Not Supported
00:28:28.487 Fixed Capacity Management: Not Supported
00:28:28.487 Variable Capacity Management: Not Supported
00:28:28.487 Delete Endurance Group: Not Supported
00:28:28.487 Delete NVM Set: Not Supported
00:28:28.487 Extended LBA Formats Supported: Not Supported
00:28:28.487 Flexible Data Placement Supported: Not Supported
00:28:28.487
00:28:28.487 Controller Memory Buffer Support
00:28:28.487 ================================
00:28:28.487 Supported: No
00:28:28.487
00:28:28.487 Persistent Memory Region Support
00:28:28.487 ================================
00:28:28.487 Supported: No
00:28:28.487
00:28:28.487 Admin Command Set Attributes
00:28:28.487 ============================
00:28:28.487 Security Send/Receive: Not Supported
00:28:28.487 Format NVM: Not Supported
00:28:28.487 Firmware Activate/Download: Not Supported
00:28:28.487 Namespace Management: Not Supported
00:28:28.487 Device Self-Test: Not Supported
00:28:28.487 Directives: Not Supported
00:28:28.487 NVMe-MI: Not Supported
00:28:28.487 Virtualization Management: Not Supported
00:28:28.487 Doorbell Buffer Config: Not Supported
00:28:28.487 Get LBA Status Capability: Not Supported
00:28:28.487 Command & Feature Lockdown Capability: Not Supported
00:28:28.487 Abort Command Limit: 1
00:28:28.487 Async Event Request Limit: 4
00:28:28.487 Number of Firmware Slots: N/A
00:28:28.487 Firmware Slot 1 Read-Only: N/A
00:28:28.487 Firmware Activation Without Reset: N/A
00:28:28.487 Multiple Update Detection Support: N/A
00:28:28.487 Firmware Update Granularity: No Information Provided
00:28:28.487 Per-Namespace SMART Log: No
00:28:28.487 Asymmetric Namespace Access Log Page: Not Supported
00:28:28.487 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:28:28.487 Command Effects Log Page: Not Supported
00:28:28.487 Get Log Page Extended Data: Supported
00:28:28.487 Telemetry Log Pages: Not Supported
00:28:28.487 Persistent Event Log Pages: Not Supported
00:28:28.487 Supported Log Pages Log Page: May Support
00:28:28.487 Commands Supported & Effects Log Page: Not Supported
00:28:28.487 Feature Identifiers & Effects Log Page:May Support
00:28:28.487 NVMe-MI Commands & Effects Log Page: May Support
00:28:28.487 Data Area 4 for Telemetry Log: Not Supported
00:28:28.487 Error Log Page Entries Supported: 128
00:28:28.487 Keep Alive: Not Supported
00:28:28.487
00:28:28.487 NVM Command Set Attributes
00:28:28.487 ==========================
00:28:28.487 Submission Queue Entry Size
00:28:28.487 Max: 1
00:28:28.487 Min: 1
00:28:28.487 Completion Queue Entry Size
00:28:28.487 Max: 1
00:28:28.487 Min: 1
00:28:28.487 Number of Namespaces: 0
00:28:28.487 Compare Command: Not Supported
00:28:28.487 Write Uncorrectable Command: Not Supported
00:28:28.487 Dataset Management Command: Not Supported
00:28:28.487 Write Zeroes Command: Not Supported
00:28:28.487 Set Features Save Field: Not Supported
00:28:28.487 Reservations: Not Supported
00:28:28.487 Timestamp: Not Supported
00:28:28.487 Copy: Not Supported
00:28:28.487 Volatile Write Cache: Not Present
00:28:28.487 Atomic Write Unit (Normal): 1
00:28:28.487 Atomic Write Unit (PFail): 1
00:28:28.487 Atomic Compare & Write Unit: 1
00:28:28.487 Fused Compare & Write: Supported
00:28:28.487 Scatter-Gather List
00:28:28.487 SGL Command Set: Supported
00:28:28.487 SGL Keyed: Supported
00:28:28.487 SGL Bit Bucket Descriptor: Not Supported
00:28:28.487 SGL Metadata Pointer: Not Supported
00:28:28.487 Oversized SGL: Not Supported
00:28:28.487 SGL Metadata Address: Not Supported
00:28:28.487 SGL Offset: Supported
00:28:28.487 Transport SGL Data Block: Not Supported
00:28:28.487 Replay Protected Memory Block: Not Supported
00:28:28.487
00:28:28.487 Firmware Slot Information
00:28:28.487 =========================
00:28:28.487 Active slot: 0
00:28:28.487
00:28:28.487
00:28:28.487 Error Log
00:28:28.487 =========
00:28:28.487
00:28:28.487 Active Namespaces
00:28:28.487 =================
00:28:28.487 Discovery Log Page
00:28:28.487 ==================
00:28:28.487 Generation Counter: 2
00:28:28.487 Number of Records: 2
00:28:28.487 Record Format: 0
00:28:28.487
00:28:28.487 Discovery Log Entry 0
00:28:28.487 ----------------------
00:28:28.487 Transport Type: 3 (TCP)
00:28:28.487 Address Family: 1 (IPv4)
00:28:28.487 Subsystem Type: 3 (Current Discovery Subsystem)
00:28:28.487 Entry Flags:
00:28:28.487 Duplicate Returned Information: 1
00:28:28.487 Explicit Persistent Connection Support for Discovery: 1
00:28:28.487 Transport Requirements:
00:28:28.487 Secure Channel: Not Required
00:28:28.487 Port ID: 0 (0x0000)
00:28:28.487 Controller ID: 65535 (0xffff)
00:28:28.487 Admin Max SQ Size: 128
00:28:28.487 Transport Service Identifier: 4420
00:28:28.487 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:28:28.487 Transport Address: 10.0.0.2
00:28:28.487 Discovery Log Entry 1
00:28:28.487 ----------------------
00:28:28.487 Transport Type: 3 (TCP)
00:28:28.487 Address Family: 1 (IPv4)
00:28:28.487 Subsystem Type: 2 (NVM Subsystem)
00:28:28.487 Entry Flags:
00:28:28.487 Duplicate Returned Information: 0
00:28:28.487 Explicit Persistent Connection Support for Discovery: 0
00:28:28.487 Transport Requirements:
00:28:28.487 Secure Channel: Not Required
00:28:28.487 Port ID: 0 (0x0000)
00:28:28.487 Controller ID: 65535 (0xffff)
00:28:28.487 Admin Max SQ Size: 128
00:28:28.487 Transport Service Identifier: 4420
00:28:28.487 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:28:28.487 Transport Address: 10.0.0.2
[2024-07-11 21:35:03.107946] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:28:28.487 [2024-07-11 21:35:03.107970] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6880) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.107983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.487 [2024-07-11 21:35:03.107992] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6a00) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.108000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.487 [2024-07-11 21:35:03.108008] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6b80) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.108016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.487 [2024-07-11 21:35:03.108024] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.108031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.487 [2024-07-11 21:35:03.108050] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108059] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108066] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.487 [2024-07-11 21:35:03.108077] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.487 [2024-07-11 21:35:03.108102] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.487 [2024-07-11 21:35:03.108197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.487 [2024-07-11 21:35:03.108213] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.487 [2024-07-11 21:35:03.108223] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108231] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.108244] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108252] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108259] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.487 [2024-07-11 21:35:03.108269]
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.487 [2024-07-11 21:35:03.108296] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.487 [2024-07-11 21:35:03.108412] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.487 [2024-07-11 21:35:03.108424] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.487 [2024-07-11 21:35:03.108430] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108437] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.108448] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:28:28.487 [2024-07-11 21:35:03.108458] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:28:28.487 [2024-07-11 21:35:03.108473] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108482] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.487 [2024-07-11 21:35:03.108499] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.487 [2024-07-11 21:35:03.108519] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.487 [2024-07-11 21:35:03.108615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.487 [2024-07-11 21:35:03.108631] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.487 [2024-07-11 21:35:03.108637] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108644] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.487 [2024-07-11 21:35:03.108662] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.487 [2024-07-11 21:35:03.108678] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.108688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.108709] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.108805] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.108821] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.108828] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.108835] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.108851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.108860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.108867] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.108877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.108898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.108988] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.109000] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.109007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109013] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.109029] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109045] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.109055] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.109076] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.109166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.109178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.109184] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109191] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.109206] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109215] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109222] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.109232] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.109252] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.109344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.109360] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.109366] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109373] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.109390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109399] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109406] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.109416] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.109436] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.109527] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.109539] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.109545] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109552] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.109568] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109577] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109583] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.109594] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.109614] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.109706] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.109726] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.109734] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109741] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.109764] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109775] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109781] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.109792] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.109813] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.109905] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.109916] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.109923] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109930] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.109945] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.109961] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.109971] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.109991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.110082] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.110093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.110100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110107] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.110122] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110131] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110138] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.110148] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.110168] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.110260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.110276] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.110282] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110289] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.110306] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110315] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110321] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.110332] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.110352] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.110444] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.110459] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.110470] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110477] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.110493] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110509] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.110519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.110540] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.110626] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.110638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.110645] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110652] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.110667] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110677] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.110693] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.110713] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.110808] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.110822] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.110829] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110836] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.110851] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110860] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.110867] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.110877] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.110898] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.110994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.111009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.111016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.111039] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111048] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.111065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.111086] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.111176] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.111187] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.111194] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111201] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 
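For reference: the GET LOG PAGE (02) commands earlier in this section read log page 0x70, the discovery log. cdw10:00ff0070 fetches the 1024-byte header (256 dwords), 02ff0070 the full 3072-byte page (header plus the two 1024-byte records shown in the dump), and 00010070 re-reads the first 8 bytes to confirm the generation counter did not change mid-read. A sketch of the header fetch through SPDK's public admin API, assuming a controller connected to the discovery subsystem (read_discovery_header and get_log_done are illustrative names):

#include "spdk/stdinc.h"
#include "spdk/nvme.h"
#include "spdk/nvmf_spec.h"

static void
get_log_done(void *arg, const struct spdk_nvme_cpl *cpl)
{
	*(bool *)arg = true;
}

/* Assumes ctrlr is connected to nqn.2014-08.org.nvmexpress.discovery. */
static int
read_discovery_header(struct spdk_nvme_ctrlr *ctrlr,
		      struct spdk_nvmf_discovery_log_page *hdr)
{
	bool done = false;
	int rc;

	/* Same command as the first GET LOG PAGE above: page 0x70,
	 * offset 0, 1024-byte header (genctr, numrec, recfmt). */
	rc = spdk_nvme_ctrlr_cmd_get_log_page(ctrlr, SPDK_NVME_LOG_DISCOVERY, 0,
					      hdr, sizeof(*hdr), 0,
					      get_log_done, &done);
	if (rc != 0) {
		return rc;
	}
	while (!done) {
		spdk_nvme_ctrlr_process_admin_completions(ctrlr);
	}
	return 0;
}
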
[2024-07-11 21:35:03.111221] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111230] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111237] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.111247] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.111268] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.111357] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.111369] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.111375] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111382] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.111397] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111406] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111413] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.111423] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.111443] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.111535] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.111550] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.111557] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111564] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.111580] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111589] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111596] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.111606] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.111627] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.111718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.111733] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.111739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.111746] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.115775] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.115788] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.488 [2024-07-11 
21:35:03.115795] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xb4ffe0) 00:28:28.488 [2024-07-11 21:35:03.115806] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.488 [2024-07-11 21:35:03.115828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xbb6d00, cid 3, qid 0 00:28:28.488 [2024-07-11 21:35:03.115931] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.488 [2024-07-11 21:35:03.115943] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.488 [2024-07-11 21:35:03.115950] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.488 [2024-07-11 21:35:03.115957] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xbb6d00) on tqpair=0xb4ffe0 00:28:28.488 [2024-07-11 21:35:03.115971] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:28:28.488 00:28:28.488 21:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:28:28.488 [2024-07-11 21:35:03.150767] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:28.488 [2024-07-11 21:35:03.150839] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1000645 ] 00:28:28.488 EAL: No free 2048 kB hugepages reported on node 1 00:28:28.488 [2024-07-11 21:35:03.187192] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:28:28.488 [2024-07-11 21:35:03.187256] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:28:28.488 [2024-07-11 21:35:03.187266] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:28:28.488 [2024-07-11 21:35:03.187285] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:28:28.488 [2024-07-11 21:35:03.187296] sock.c: 337:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:28:28.488 [2024-07-11 21:35:03.187519] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:28:28.488 [2024-07-11 21:35:03.187563] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x867fe0 0 00:28:28.489 [2024-07-11 21:35:03.194038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:28:28.489 [2024-07-11 21:35:03.194058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:28:28.489 [2024-07-11 21:35:03.194067] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:28:28.489 [2024-07-11 21:35:03.194073] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:28:28.489 [2024-07-11 21:35:03.194114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.194126] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.194133] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on 
tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.194149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:28.489 [2024-07-11 21:35:03.194175] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.201773] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.201792] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 [2024-07-11 21:35:03.201800] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.201808] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.201827] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:28:28.489 [2024-07-11 21:35:03.201839] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:28:28.489 [2024-07-11 21:35:03.201848] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:28:28.489 [2024-07-11 21:35:03.201867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.201876] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.201883] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.201894] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.489 [2024-07-11 21:35:03.201923] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.202038] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.202053] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 [2024-07-11 21:35:03.202061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.202076] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:28:28.489 [2024-07-11 21:35:03.202090] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:28:28.489 [2024-07-11 21:35:03.202103] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202110] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202117] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.202128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.489 [2024-07-11 21:35:03.202150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.202246] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.202259] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 
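For reference: the "read vs" / "read cap" states above fetch the version and capabilities properties that the identify dump later reports as "NVMe Specification Version (VS): 1.3" and "Maximum Queue Entries: 128". A sketch of reading the same registers through SPDK's public accessors, assuming a connected controller (print_vs_and_cap is an illustrative name):

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

static void
print_vs_and_cap(struct spdk_nvme_ctrlr *ctrlr)
{
	union spdk_nvme_vs_register vs = spdk_nvme_ctrlr_get_regs_vs(ctrlr);
	union spdk_nvme_cap_register cap = spdk_nvme_ctrlr_get_regs_cap(ctrlr);

	/* Prints "NVMe 1.3, max queue entries 128" for the target above;
	 * CAP.MQES is zero-based, hence the +1. */
	printf("NVMe %u.%u, max queue entries %u\n",
	       vs.bits.mjr, vs.bits.mnr, cap.bits.mqes + 1);
}
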
[2024-07-11 21:35:03.202266] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202273] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.202282] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:28:28.489 [2024-07-11 21:35:03.202296] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:28:28.489 [2024-07-11 21:35:03.202309] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202316] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202323] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.202333] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.489 [2024-07-11 21:35:03.202353] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.202448] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.202460] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 [2024-07-11 21:35:03.202467] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202475] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.202483] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:28:28.489 [2024-07-11 21:35:03.202500] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202509] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202516] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.202526] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.489 [2024-07-11 21:35:03.202546] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.202643] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.202662] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 [2024-07-11 21:35:03.202670] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202677] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.202686] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:28:28.489 [2024-07-11 21:35:03.202694] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:28:28.489 [2024-07-11 21:35:03.202708] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:28:28.489 [2024-07-11 
21:35:03.202818] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:28:28.489 [2024-07-11 21:35:03.202828] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:28:28.489 [2024-07-11 21:35:03.202842] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202850] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.202856] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.202867] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.489 [2024-07-11 21:35:03.202889] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.202991] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.203006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 [2024-07-11 21:35:03.203013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.203028] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:28:28.489 [2024-07-11 21:35:03.203045] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203054] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203061] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.203071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.489 [2024-07-11 21:35:03.203092] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.203190] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.203205] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 [2024-07-11 21:35:03.203212] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203219] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.203227] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:28:28.489 [2024-07-11 21:35:03.203236] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:28:28.489 [2024-07-11 21:35:03.203250] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:28:28.489 [2024-07-11 21:35:03.203265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:28:28.489 [2024-07-11 21:35:03.203279] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:28:28.489 [2024-07-11 21:35:03.203287] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.203300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.489 [2024-07-11 21:35:03.203322] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.203468] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.489 [2024-07-11 21:35:03.203484] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.489 [2024-07-11 21:35:03.203491] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203498] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x867fe0): datao=0, datal=4096, cccid=0 00:28:28.489 [2024-07-11 21:35:03.203506] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8ce880) on tqpair(0x867fe0): expected_datao=0, payload_size=4096 00:28:28.489 [2024-07-11 21:35:03.203514] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203524] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203532] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203574] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.203585] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 [2024-07-11 21:35:03.203592] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203599] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.203611] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:28:28.489 [2024-07-11 21:35:03.203624] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:28:28.489 [2024-07-11 21:35:03.203633] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:28:28.489 [2024-07-11 21:35:03.203641] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:28:28.489 [2024-07-11 21:35:03.203649] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:28:28.489 [2024-07-11 21:35:03.203657] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:28:28.489 [2024-07-11 21:35:03.203672] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:28:28.489 [2024-07-11 21:35:03.203684] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203692] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203699] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.203710] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:28.489 
[2024-07-11 21:35:03.203731] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.203879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.489 [2024-07-11 21:35:03.203893] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.489 [2024-07-11 21:35:03.203900] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203907] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.489 [2024-07-11 21:35:03.203918] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203926] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203933] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.203943] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.489 [2024-07-11 21:35:03.203957] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203964] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203971] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.203980] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.489 [2024-07-11 21:35:03.203989] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.203996] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.204003] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.204011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.489 [2024-07-11 21:35:03.204021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.204028] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.204034] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.204043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.489 [2024-07-11 21:35:03.204052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:28:28.489 [2024-07-11 21:35:03.204070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:28:28.489 [2024-07-11 21:35:03.204083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.489 [2024-07-11 21:35:03.204091] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x867fe0) 00:28:28.489 [2024-07-11 21:35:03.204101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.489 [2024-07-11 21:35:03.204139] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ce880, cid 0, qid 0 00:28:28.489 [2024-07-11 21:35:03.204151] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cea00, cid 1, qid 0 00:28:28.489 [2024-07-11 21:35:03.204159] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ceb80, cid 2, qid 0 00:28:28.489 [2024-07-11 21:35:03.204166] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.490 [2024-07-11 21:35:03.204174] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cee80, cid 4, qid 0 00:28:28.490 [2024-07-11 21:35:03.204384] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.204401] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.204408] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.204415] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cee80) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.204424] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:28:28.490 [2024-07-11 21:35:03.204433] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.204448] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.204460] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.204471] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.204478] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.204489] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.204500] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:28:28.490 [2024-07-11 21:35:03.204521] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cee80, cid 4, qid 0 00:28:28.490 [2024-07-11 21:35:03.204618] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.204633] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.204640] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.204647] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cee80) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.204712] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.204730] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.204746] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.204762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 
21:35:03.204773] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.204796] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cee80, cid 4, qid 0 00:28:28.490 [2024-07-11 21:35:03.204950] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.490 [2024-07-11 21:35:03.204965] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.490 [2024-07-11 21:35:03.204972] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.204979] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x867fe0): datao=0, datal=4096, cccid=4 00:28:28.490 [2024-07-11 21:35:03.204987] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cee80) on tqpair(0x867fe0): expected_datao=0, payload_size=4096 00:28:28.490 [2024-07-11 21:35:03.204995] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205004] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205012] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205050] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.205064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.205071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205078] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cee80) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.205095] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:28:28.490 [2024-07-11 21:35:03.205113] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.205131] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.205145] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205153] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.205163] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.205185] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cee80, cid 4, qid 0 00:28:28.490 [2024-07-11 21:35:03.205340] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.490 [2024-07-11 21:35:03.205353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.490 [2024-07-11 21:35:03.205365] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205373] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x867fe0): datao=0, datal=4096, cccid=4 00:28:28.490 [2024-07-11 21:35:03.205381] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cee80) on tqpair(0x867fe0): expected_datao=0, payload_size=4096 00:28:28.490 [2024-07-11 21:35:03.205388] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:28:28.490 [2024-07-11 21:35:03.205398] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205406] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205418] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.205427] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.205434] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205441] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cee80) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.205463] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.205482] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.205496] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205504] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.205514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.205535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cee80, cid 4, qid 0 00:28:28.490 [2024-07-11 21:35:03.205688] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.490 [2024-07-11 21:35:03.205700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.490 [2024-07-11 21:35:03.205707] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205714] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x867fe0): datao=0, datal=4096, cccid=4 00:28:28.490 [2024-07-11 21:35:03.205721] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cee80) on tqpair(0x867fe0): expected_datao=0, payload_size=4096 00:28:28.490 [2024-07-11 21:35:03.205729] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205739] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.205747] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.209772] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.209787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.209794] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.209802] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cee80) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.209815] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.209832] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.209847] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] 
setting state to set supported features (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.209860] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.209869] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.209881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.209890] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:28:28.490 [2024-07-11 21:35:03.209898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:28:28.490 [2024-07-11 21:35:03.209907] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:28:28.490 [2024-07-11 21:35:03.209927] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.209936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.209946] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.209958] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.209965] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.209972] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.209981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:28:28.490 [2024-07-11 21:35:03.210007] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cee80, cid 4, qid 0 00:28:28.490 [2024-07-11 21:35:03.210020] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cf000, cid 5, qid 0 00:28:28.490 [2024-07-11 21:35:03.210166] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.210178] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.210185] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210192] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cee80) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.210202] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.210212] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.210219] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210226] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cf000) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.210241] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.210260] nvme_qpair.c: 
213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.210281] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cf000, cid 5, qid 0 00:28:28.490 [2024-07-11 21:35:03.210430] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.210445] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.210452] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210458] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cf000) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.210474] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210484] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.210494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.210515] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cf000, cid 5, qid 0 00:28:28.490 [2024-07-11 21:35:03.210632] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.210648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.210658] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cf000) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.210681] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210691] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.210701] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.210722] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cf000, cid 5, qid 0 00:28:28.490 [2024-07-11 21:35:03.210828] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.210844] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.210851] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210858] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cf000) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.210882] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210893] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.210904] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.210916] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210924] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.210934] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.210945] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210953] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.210962] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.210974] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.210982] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x867fe0) 00:28:28.490 [2024-07-11 21:35:03.210991] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.490 [2024-07-11 21:35:03.211013] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cf000, cid 5, qid 0 00:28:28.490 [2024-07-11 21:35:03.211025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cee80, cid 4, qid 0 00:28:28.490 [2024-07-11 21:35:03.211033] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cf180, cid 6, qid 0 00:28:28.490 [2024-07-11 21:35:03.211040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cf300, cid 7, qid 0 00:28:28.490 [2024-07-11 21:35:03.211232] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.490 [2024-07-11 21:35:03.211248] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.490 [2024-07-11 21:35:03.211255] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211261] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x867fe0): datao=0, datal=8192, cccid=5 00:28:28.490 [2024-07-11 21:35:03.211269] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cf000) on tqpair(0x867fe0): expected_datao=0, payload_size=8192 00:28:28.490 [2024-07-11 21:35:03.211277] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211311] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211326] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.490 [2024-07-11 21:35:03.211344] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.490 [2024-07-11 21:35:03.211351] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211357] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x867fe0): datao=0, datal=512, cccid=4 00:28:28.490 [2024-07-11 21:35:03.211365] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cee80) on tqpair(0x867fe0): expected_datao=0, payload_size=512 00:28:28.490 [2024-07-11 21:35:03.211372] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211382] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211389] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.490 
[2024-07-11 21:35:03.211397] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.490 [2024-07-11 21:35:03.211406] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.490 [2024-07-11 21:35:03.211413] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211419] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x867fe0): datao=0, datal=512, cccid=6 00:28:28.490 [2024-07-11 21:35:03.211427] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cf180) on tqpair(0x867fe0): expected_datao=0, payload_size=512 00:28:28.490 [2024-07-11 21:35:03.211434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211443] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211450] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:28:28.490 [2024-07-11 21:35:03.211468] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:28:28.490 [2024-07-11 21:35:03.211474] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211481] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x867fe0): datao=0, datal=4096, cccid=7 00:28:28.490 [2024-07-11 21:35:03.211488] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x8cf300) on tqpair(0x867fe0): expected_datao=0, payload_size=4096 00:28:28.490 [2024-07-11 21:35:03.211496] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211506] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211513] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211525] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.211534] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.211541] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211547] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cf000) on tqpair=0x867fe0 00:28:28.490 [2024-07-11 21:35:03.211566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.490 [2024-07-11 21:35:03.211577] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.490 [2024-07-11 21:35:03.211583] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.490 [2024-07-11 21:35:03.211590] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cee80) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.211605] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.491 [2024-07-11 21:35:03.211615] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.491 [2024-07-11 21:35:03.211622] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.211628] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cf180) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.211639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.491 [2024-07-11 21:35:03.211648] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.491 [2024-07-11 21:35:03.211658] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.211665] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cf300) on tqpair=0x867fe0 00:28:28.491 ===================================================== 00:28:28.491 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:28.491 ===================================================== 00:28:28.491 Controller Capabilities/Features 00:28:28.491 ================================ 00:28:28.491 Vendor ID: 8086 00:28:28.491 Subsystem Vendor ID: 8086 00:28:28.491 Serial Number: SPDK00000000000001 00:28:28.491 Model Number: SPDK bdev Controller 00:28:28.491 Firmware Version: 24.09 00:28:28.491 Recommended Arb Burst: 6 00:28:28.491 IEEE OUI Identifier: e4 d2 5c 00:28:28.491 Multi-path I/O 00:28:28.491 May have multiple subsystem ports: Yes 00:28:28.491 May have multiple controllers: Yes 00:28:28.491 Associated with SR-IOV VF: No 00:28:28.491 Max Data Transfer Size: 131072 00:28:28.491 Max Number of Namespaces: 32 00:28:28.491 Max Number of I/O Queues: 127 00:28:28.491 NVMe Specification Version (VS): 1.3 00:28:28.491 NVMe Specification Version (Identify): 1.3 00:28:28.491 Maximum Queue Entries: 128 00:28:28.491 Contiguous Queues Required: Yes 00:28:28.491 Arbitration Mechanisms Supported 00:28:28.491 Weighted Round Robin: Not Supported 00:28:28.491 Vendor Specific: Not Supported 00:28:28.491 Reset Timeout: 15000 ms 00:28:28.491 Doorbell Stride: 4 bytes 00:28:28.491 NVM Subsystem Reset: Not Supported 00:28:28.491 Command Sets Supported 00:28:28.491 NVM Command Set: Supported 00:28:28.491 Boot Partition: Not Supported 00:28:28.491 Memory Page Size Minimum: 4096 bytes 00:28:28.491 Memory Page Size Maximum: 4096 bytes 00:28:28.491 Persistent Memory Region: Not Supported 00:28:28.491 Optional Asynchronous Events Supported 00:28:28.491 Namespace Attribute Notices: Supported 00:28:28.491 Firmware Activation Notices: Not Supported 00:28:28.491 ANA Change Notices: Not Supported 00:28:28.491 PLE Aggregate Log Change Notices: Not Supported 00:28:28.491 LBA Status Info Alert Notices: Not Supported 00:28:28.491 EGE Aggregate Log Change Notices: Not Supported 00:28:28.491 Normal NVM Subsystem Shutdown event: Not Supported 00:28:28.491 Zone Descriptor Change Notices: Not Supported 00:28:28.491 Discovery Log Change Notices: Not Supported 00:28:28.491 Controller Attributes 00:28:28.491 128-bit Host Identifier: Supported 00:28:28.491 Non-Operational Permissive Mode: Not Supported 00:28:28.491 NVM Sets: Not Supported 00:28:28.491 Read Recovery Levels: Not Supported 00:28:28.491 Endurance Groups: Not Supported 00:28:28.491 Predictable Latency Mode: Not Supported 00:28:28.491 Traffic Based Keep ALive: Not Supported 00:28:28.491 Namespace Granularity: Not Supported 00:28:28.491 SQ Associations: Not Supported 00:28:28.491 UUID List: Not Supported 00:28:28.491 Multi-Domain Subsystem: Not Supported 00:28:28.491 Fixed Capacity Management: Not Supported 00:28:28.491 Variable Capacity Management: Not Supported 00:28:28.491 Delete Endurance Group: Not Supported 00:28:28.491 Delete NVM Set: Not Supported 00:28:28.491 Extended LBA Formats Supported: Not Supported 00:28:28.491 Flexible Data Placement Supported: Not Supported 00:28:28.491 00:28:28.491 Controller Memory Buffer Support 00:28:28.491 ================================ 00:28:28.491 Supported: No 00:28:28.491 00:28:28.491 Persistent Memory Region Support 00:28:28.491 ================================ 00:28:28.491 Supported: No 00:28:28.491 
00:28:28.491 Admin Command Set Attributes 00:28:28.491 ============================ 00:28:28.491 Security Send/Receive: Not Supported 00:28:28.491 Format NVM: Not Supported 00:28:28.491 Firmware Activate/Download: Not Supported 00:28:28.491 Namespace Management: Not Supported 00:28:28.491 Device Self-Test: Not Supported 00:28:28.491 Directives: Not Supported 00:28:28.491 NVMe-MI: Not Supported 00:28:28.491 Virtualization Management: Not Supported 00:28:28.491 Doorbell Buffer Config: Not Supported 00:28:28.491 Get LBA Status Capability: Not Supported 00:28:28.491 Command & Feature Lockdown Capability: Not Supported 00:28:28.491 Abort Command Limit: 4 00:28:28.491 Async Event Request Limit: 4 00:28:28.491 Number of Firmware Slots: N/A 00:28:28.491 Firmware Slot 1 Read-Only: N/A 00:28:28.491 Firmware Activation Without Reset: N/A 00:28:28.491 Multiple Update Detection Support: N/A 00:28:28.491 Firmware Update Granularity: No Information Provided 00:28:28.491 Per-Namespace SMART Log: No 00:28:28.491 Asymmetric Namespace Access Log Page: Not Supported 00:28:28.491 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:28:28.491 Command Effects Log Page: Supported 00:28:28.491 Get Log Page Extended Data: Supported 00:28:28.491 Telemetry Log Pages: Not Supported 00:28:28.491 Persistent Event Log Pages: Not Supported 00:28:28.491 Supported Log Pages Log Page: May Support 00:28:28.491 Commands Supported & Effects Log Page: Not Supported 00:28:28.491 Feature Identifiers & Effects Log Page:May Support 00:28:28.491 NVMe-MI Commands & Effects Log Page: May Support 00:28:28.491 Data Area 4 for Telemetry Log: Not Supported 00:28:28.491 Error Log Page Entries Supported: 128 00:28:28.491 Keep Alive: Supported 00:28:28.491 Keep Alive Granularity: 10000 ms 00:28:28.491 00:28:28.491 NVM Command Set Attributes 00:28:28.491 ========================== 00:28:28.491 Submission Queue Entry Size 00:28:28.491 Max: 64 00:28:28.491 Min: 64 00:28:28.491 Completion Queue Entry Size 00:28:28.491 Max: 16 00:28:28.491 Min: 16 00:28:28.491 Number of Namespaces: 32 00:28:28.491 Compare Command: Supported 00:28:28.491 Write Uncorrectable Command: Not Supported 00:28:28.491 Dataset Management Command: Supported 00:28:28.491 Write Zeroes Command: Supported 00:28:28.491 Set Features Save Field: Not Supported 00:28:28.491 Reservations: Supported 00:28:28.491 Timestamp: Not Supported 00:28:28.491 Copy: Supported 00:28:28.491 Volatile Write Cache: Present 00:28:28.491 Atomic Write Unit (Normal): 1 00:28:28.491 Atomic Write Unit (PFail): 1 00:28:28.491 Atomic Compare & Write Unit: 1 00:28:28.491 Fused Compare & Write: Supported 00:28:28.491 Scatter-Gather List 00:28:28.491 SGL Command Set: Supported 00:28:28.491 SGL Keyed: Supported 00:28:28.491 SGL Bit Bucket Descriptor: Not Supported 00:28:28.491 SGL Metadata Pointer: Not Supported 00:28:28.491 Oversized SGL: Not Supported 00:28:28.491 SGL Metadata Address: Not Supported 00:28:28.491 SGL Offset: Supported 00:28:28.491 Transport SGL Data Block: Not Supported 00:28:28.491 Replay Protected Memory Block: Not Supported 00:28:28.491 00:28:28.491 Firmware Slot Information 00:28:28.491 ========================= 00:28:28.491 Active slot: 1 00:28:28.491 Slot 1 Firmware Revision: 24.09 00:28:28.491 00:28:28.491 00:28:28.491 Commands Supported and Effects 00:28:28.491 ============================== 00:28:28.491 Admin Commands 00:28:28.491 -------------- 00:28:28.491 Get Log Page (02h): Supported 00:28:28.491 Identify (06h): Supported 00:28:28.491 Abort (08h): Supported 00:28:28.491 Set Features (09h): 
Supported 00:28:28.491 Get Features (0Ah): Supported 00:28:28.491 Asynchronous Event Request (0Ch): Supported 00:28:28.491 Keep Alive (18h): Supported 00:28:28.491 I/O Commands 00:28:28.491 ------------ 00:28:28.491 Flush (00h): Supported LBA-Change 00:28:28.491 Write (01h): Supported LBA-Change 00:28:28.491 Read (02h): Supported 00:28:28.491 Compare (05h): Supported 00:28:28.491 Write Zeroes (08h): Supported LBA-Change 00:28:28.491 Dataset Management (09h): Supported LBA-Change 00:28:28.491 Copy (19h): Supported LBA-Change 00:28:28.491 00:28:28.491 Error Log 00:28:28.491 ========= 00:28:28.491 00:28:28.491 Arbitration 00:28:28.491 =========== 00:28:28.491 Arbitration Burst: 1 00:28:28.491 00:28:28.491 Power Management 00:28:28.491 ================ 00:28:28.491 Number of Power States: 1 00:28:28.491 Current Power State: Power State #0 00:28:28.491 Power State #0: 00:28:28.491 Max Power: 0.00 W 00:28:28.491 Non-Operational State: Operational 00:28:28.491 Entry Latency: Not Reported 00:28:28.491 Exit Latency: Not Reported 00:28:28.491 Relative Read Throughput: 0 00:28:28.491 Relative Read Latency: 0 00:28:28.491 Relative Write Throughput: 0 00:28:28.491 Relative Write Latency: 0 00:28:28.491 Idle Power: Not Reported 00:28:28.491 Active Power: Not Reported 00:28:28.491 Non-Operational Permissive Mode: Not Supported 00:28:28.491 00:28:28.491 Health Information 00:28:28.491 ================== 00:28:28.491 Critical Warnings: 00:28:28.491 Available Spare Space: OK 00:28:28.491 Temperature: OK 00:28:28.491 Device Reliability: OK 00:28:28.491 Read Only: No 00:28:28.491 Volatile Memory Backup: OK 00:28:28.491 Current Temperature: 0 Kelvin (-273 Celsius) 00:28:28.491 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:28:28.491 Available Spare: 0% 00:28:28.491 Available Spare Threshold: 0% 00:28:28.491 Life Percentage Used:[2024-07-11 21:35:03.211805] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.211817] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x867fe0) 00:28:28.491 [2024-07-11 21:35:03.211828] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.491 [2024-07-11 21:35:03.211851] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8cf300, cid 7, qid 0 00:28:28.491 [2024-07-11 21:35:03.211968] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.491 [2024-07-11 21:35:03.211983] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.491 [2024-07-11 21:35:03.211990] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.211997] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cf300) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.212047] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:28:28.491 [2024-07-11 21:35:03.212067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ce880) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.212078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.491 [2024-07-11 21:35:03.212087] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8cea00) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.212095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.491 [2024-07-11 21:35:03.212104] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ceb80) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.212112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.491 [2024-07-11 21:35:03.212120] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.212128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:28.491 [2024-07-11 21:35:03.212140] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212148] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212155] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.491 [2024-07-11 21:35:03.212166] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.491 [2024-07-11 21:35:03.212188] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.491 [2024-07-11 21:35:03.212335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.491 [2024-07-11 21:35:03.212351] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.491 [2024-07-11 21:35:03.212358] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212365] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.212376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212384] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212391] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.491 [2024-07-11 21:35:03.212401] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.491 [2024-07-11 21:35:03.212427] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.491 [2024-07-11 21:35:03.212547] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.491 [2024-07-11 21:35:03.212563] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.491 [2024-07-11 21:35:03.212571] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212578] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.212586] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:28:28.491 [2024-07-11 21:35:03.212594] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:28:28.491 [2024-07-11 21:35:03.212610] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212619] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212626] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 
00:28:28.491 [2024-07-11 21:35:03.212636] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.491 [2024-07-11 21:35:03.212656] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.491 [2024-07-11 21:35:03.212803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.491 [2024-07-11 21:35:03.212817] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.491 [2024-07-11 21:35:03.212824] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212831] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.491 [2024-07-11 21:35:03.212847] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212857] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.491 [2024-07-11 21:35:03.212863] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.491 [2024-07-11 21:35:03.212874] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.491 [2024-07-11 21:35:03.212894] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.491 [2024-07-11 21:35:03.212994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.491 [2024-07-11 21:35:03.213009] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.492 [2024-07-11 21:35:03.213016] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213023] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.492 [2024-07-11 21:35:03.213040] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213049] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213056] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.492 [2024-07-11 21:35:03.213067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.492 [2024-07-11 21:35:03.213087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.492 [2024-07-11 21:35:03.213192] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.492 [2024-07-11 21:35:03.213204] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.492 [2024-07-11 21:35:03.213211] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213218] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.492 [2024-07-11 21:35:03.213234] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213244] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213250] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.492 [2024-07-11 21:35:03.213261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.492 [2024-07-11 21:35:03.213281] 
nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.492 [2024-07-11 21:35:03.213380] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.492 [2024-07-11 21:35:03.213396] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.492 [2024-07-11 21:35:03.213403] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213410] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.492 [2024-07-11 21:35:03.213426] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213436] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.492 [2024-07-11 21:35:03.213442] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.492 [2024-07-11 21:35:03.213452] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.492 [2024-07-11 21:35:03.213473] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.492 [2024-07-11 21:35:03.213581] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.493 [2024-07-11 21:35:03.213593] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.493 [2024-07-11 21:35:03.213600] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.493 [2024-07-11 21:35:03.213607] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.493 [2024-07-11 21:35:03.213623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.493 [2024-07-11 21:35:03.213632] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.493 [2024-07-11 21:35:03.213639] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.493 [2024-07-11 21:35:03.213649] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.493 [2024-07-11 21:35:03.213669] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.493 [2024-07-11 21:35:03.217767] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.493 [2024-07-11 21:35:03.217785] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.493 [2024-07-11 21:35:03.217792] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.493 [2024-07-11 21:35:03.217799] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.493 [2024-07-11 21:35:03.217818] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:28:28.493 [2024-07-11 21:35:03.217828] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:28:28.493 [2024-07-11 21:35:03.217835] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x867fe0) 00:28:28.493 [2024-07-11 21:35:03.217845] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:28.493 [2024-07-11 21:35:03.217867] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x8ced00, cid 3, qid 0 00:28:28.493 [2024-07-11 21:35:03.218015] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:28:28.493 [2024-07-11 
21:35:03.218027] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:28:28.493 [2024-07-11 21:35:03.218034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:28:28.493 [2024-07-11 21:35:03.218041] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x8ced00) on tqpair=0x867fe0 00:28:28.493 [2024-07-11 21:35:03.218054] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 5 milliseconds 00:28:28.493 0% 00:28:28.493 Data Units Read: 0 00:28:28.493 Data Units Written: 0 00:28:28.493 Host Read Commands: 0 00:28:28.493 Host Write Commands: 0 00:28:28.493 Controller Busy Time: 0 minutes 00:28:28.493 Power Cycles: 0 00:28:28.493 Power On Hours: 0 hours 00:28:28.493 Unsafe Shutdowns: 0 00:28:28.493 Unrecoverable Media Errors: 0 00:28:28.493 Lifetime Error Log Entries: 0 00:28:28.493 Warning Temperature Time: 0 minutes 00:28:28.493 Critical Temperature Time: 0 minutes 00:28:28.493 00:28:28.493 Number of Queues 00:28:28.493 ================ 00:28:28.493 Number of I/O Submission Queues: 127 00:28:28.493 Number of I/O Completion Queues: 127 00:28:28.493 00:28:28.493 Active Namespaces 00:28:28.493 ================= 00:28:28.493 Namespace ID:1 00:28:28.493 Error Recovery Timeout: Unlimited 00:28:28.493 Command Set Identifier: NVM (00h) 00:28:28.493 Deallocate: Supported 00:28:28.493 Deallocated/Unwritten Error: Not Supported 00:28:28.493 Deallocated Read Value: Unknown 00:28:28.493 Deallocate in Write Zeroes: Not Supported 00:28:28.493 Deallocated Guard Field: 0xFFFF 00:28:28.493 Flush: Supported 00:28:28.493 Reservation: Supported 00:28:28.493 Namespace Sharing Capabilities: Multiple Controllers 00:28:28.493 Size (in LBAs): 131072 (0GiB) 00:28:28.493 Capacity (in LBAs): 131072 (0GiB) 00:28:28.493 Utilization (in LBAs): 131072 (0GiB) 00:28:28.493 NGUID: ABCDEF0123456789ABCDEF0123456789 00:28:28.493 EUI64: ABCDEF0123456789 00:28:28.493 UUID: 99a11396-0341-4a9e-ae15-54f95c2a7670 00:28:28.493 Thin Provisioning: Not Supported 00:28:28.493 Per-NS Atomic Units: Yes 00:28:28.493 Atomic Boundary Size (Normal): 0 00:28:28.493 Atomic Boundary Size (PFail): 0 00:28:28.493 Atomic Boundary Offset: 0 00:28:28.493 Maximum Single Source Range Length: 65535 00:28:28.493 Maximum Copy Length: 65535 00:28:28.493 Maximum Source Range Count: 1 00:28:28.493 NGUID/EUI64 Never Reused: No 00:28:28.493 Namespace Write Protected: No 00:28:28.493 Number of LBA Formats: 1 00:28:28.493 Current LBA Format: LBA Format #00 00:28:28.493 LBA Format #00: Data Size: 512 Metadata Size: 0 00:28:28.493 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@51 -- # sync 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:28.493 21:35:03 
nvmf_tcp.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:28.493 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:28.749 rmmod nvme_tcp 00:28:28.749 rmmod nvme_fabrics 00:28:28.749 rmmod nvme_keyring 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 1000506 ']' 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 1000506 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 1000506 ']' 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 1000506 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1000506 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1000506' 00:28:28.749 killing process with pid 1000506 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@967 -- # kill 1000506 00:28:28.749 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@972 -- # wait 1000506 00:28:29.007 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:29.007 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:29.007 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:29.007 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:29.007 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:29.007 21:35:03 nvmf_tcp.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:29.007 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:29.007 21:35:03 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:30.946 21:35:05 nvmf_tcp.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:30.946 00:28:30.946 real 0m5.510s 00:28:30.946 user 0m4.420s 00:28:30.946 sys 0m1.909s 00:28:30.946 21:35:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:30.946 21:35:05 nvmf_tcp.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:28:30.946 ************************************ 00:28:30.946 END TEST nvmf_identify 00:28:30.946 ************************************ 00:28:30.946 21:35:05 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:28:30.946 21:35:05 nvmf_tcp -- nvmf/nvmf.sh@98 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:30.946 21:35:05 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:28:30.946 21:35:05 nvmf_tcp -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:28:30.946 21:35:05 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:30.946 ************************************ 00:28:30.946 START TEST nvmf_perf 00:28:30.946 ************************************ 00:28:30.946 21:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:28:31.204 * Looking for test storage... 00:28:31.204 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:28:31.204 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.205 
21:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:28:31.205 21:35:05 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@335 -- 
# (( 2 == 0 )) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:28:33.106 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:28:33.106 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:28:33.106 Found net devices under 0000:0a:00.0: cvl_0_0 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:28:33.106 Found net devices under 0000:0a:00.1: cvl_0_1 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- 
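The device discovery above is plain sysfs: for each whitelisted e810 PCI function, the harness globs the device's net/ directory to recover the kernel netdev name (cvl_0_0 and cvl_0_1 here), keeping only interfaces that are up. A standalone equivalent of that lookup (a sketch, not the harness code verbatim):

    for pci in 0000:0a:00.0 0000:0a:00.1; do
      for path in /sys/bus/pci/devices/$pci/net/*; do
        echo "Found net device under $pci: ${path##*/}"   # strip the sysfs prefix
      done
    done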
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:33.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:33.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:28:33.106 00:28:33.106 --- 10.0.0.2 ping statistics --- 00:28:33.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.106 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:33.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:33.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:28:33.106 00:28:33.106 --- 10.0.0.1 ping statistics --- 00:28:33.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:33.106 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:28:33.106 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=1002575 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 1002575 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 1002575 ']' 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:33.107 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:33.107 21:35:07 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:33.107 [2024-07-11 21:35:07.857986] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:28:33.107 [2024-07-11 21:35:07.858079] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:33.365 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.365 [2024-07-11 21:35:07.923631] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:33.365 [2024-07-11 21:35:08.013788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:33.365 [2024-07-11 21:35:08.013852] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
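Condensing what nvmf_tcp_init did in the records above: the target-side netdev (cvl_0_0) is moved into its own network namespace while the initiator side (cvl_0_1) stays in the root namespace, so a single host gets a real TCP path between 10.0.0.1 and 10.0.0.2, verified by the two pings. A minimal replay with the names and addresses from this log (surrounding checks and error handling omitted):

    ip netns add cvl_0_0_ns_spdk                    # target namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk       # move the target NIC into it
    ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address, root ns
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
    ping -c 1 10.0.0.2                              # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target -> initiator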
00:28:33.365 [2024-07-11 21:35:08.013866] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:33.365 [2024-07-11 21:35:08.013877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:33.365 [2024-07-11 21:35:08.013887] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:33.365 [2024-07-11 21:35:08.013937] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:33.365 [2024-07-11 21:35:08.013997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:33.365 [2024-07-11 21:35:08.014063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:33.365 [2024-07-11 21:35:08.014066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:33.621 21:35:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:33.621 21:35:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:28:33.621 21:35:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:33.621 21:35:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:33.621 21:35:08 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:28:33.621 21:35:08 nvmf_tcp.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:33.621 21:35:08 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:28:33.621 21:35:08 nvmf_tcp.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:28:36.898 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:28:36.898 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:28:36.898 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:28:36.898 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:28:37.156 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:28:37.156 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:28:37.156 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:28:37.156 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:28:37.156 21:35:11 nvmf_tcp.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:28:37.413 [2024-07-11 21:35:12.040063] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:37.413 21:35:12 nvmf_tcp.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:37.670 21:35:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:37.670 21:35:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:37.927 21:35:12 nvmf_tcp.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:28:37.927 21:35:12 nvmf_tcp.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:38.184 21:35:12 nvmf_tcp.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:38.440 [2024-07-11 21:35:13.047772] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:38.440 21:35:13 nvmf_tcp.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:38.697 21:35:13 nvmf_tcp.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:28:38.697 21:35:13 nvmf_tcp.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:38.697 21:35:13 nvmf_tcp.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:28:38.697 21:35:13 nvmf_tcp.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:28:40.067 Initializing NVMe Controllers 00:28:40.068 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:28:40.068 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:28:40.068 Initialization complete. Launching workers. 00:28:40.068 ======================================================== 00:28:40.068 Latency(us) 00:28:40.068 Device Information : IOPS MiB/s Average min max 00:28:40.068 PCIE (0000:88:00.0) NSID 1 from core 0: 84408.44 329.72 378.73 33.15 6247.44 00:28:40.068 ======================================================== 00:28:40.068 Total : 84408.44 329.72 378.73 33.15 6247.44 00:28:40.068 00:28:40.068 21:35:14 nvmf_tcp.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:40.068 EAL: No free 2048 kB hugepages reported on node 1 00:28:41.438 Initializing NVMe Controllers 00:28:41.438 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:41.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:41.438 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:41.438 Initialization complete. Launching workers. 
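Stripped of the xtrace noise, the entire target bring-up above is a handful of rpc.py calls. A condensed sketch (RPC is an editorial shorthand for the full rpc.py workspace path used in this log; NQN, serial, and address as logged):

    RPC=scripts/rpc.py   # shorthand for the full workspace path above
    $RPC nvmf_create_transport -t tcp -o
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420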
00:28:41.438 ======================================================== 00:28:41.439 Latency(us) 00:28:41.439 Device Information : IOPS MiB/s Average min max 00:28:41.439 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 77.00 0.30 13459.93 159.07 45809.74 00:28:41.439 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 76.00 0.30 13216.86 5986.36 47899.85 00:28:41.439 ======================================================== 00:28:41.439 Total : 153.00 0.60 13339.19 159.07 47899.85 00:28:41.439 00:28:41.439 21:35:16 nvmf_tcp.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:41.439 EAL: No free 2048 kB hugepages reported on node 1 00:28:42.811 Initializing NVMe Controllers 00:28:42.812 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:42.812 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:42.812 Initialization complete. Launching workers. 00:28:42.812 ======================================================== 00:28:42.812 Latency(us) 00:28:42.812 Device Information : IOPS MiB/s Average min max 00:28:42.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8187.51 31.98 3909.20 670.15 11120.14 00:28:42.812 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3800.49 14.85 8421.55 4629.47 18169.32 00:28:42.812 ======================================================== 00:28:42.812 Total : 11987.99 46.83 5339.73 670.15 18169.32 00:28:42.812 00:28:42.812 21:35:17 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:28:42.812 21:35:17 nvmf_tcp.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:28:42.812 21:35:17 nvmf_tcp.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:42.812 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.341 Initializing NVMe Controllers 00:28:45.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.341 Controller IO queue size 128, less than required. 00:28:45.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.341 Controller IO queue size 128, less than required. 00:28:45.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:45.341 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:45.341 Initialization complete. Launching workers. 
00:28:45.341 ======================================================== 00:28:45.341 Latency(us) 00:28:45.341 Device Information : IOPS MiB/s Average min max 00:28:45.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1619.43 404.86 80393.27 49515.71 159079.70 00:28:45.341 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.48 145.87 223696.74 110947.28 335086.95 00:28:45.341 ======================================================== 00:28:45.341 Total : 2202.91 550.73 118349.50 49515.71 335086.95 00:28:45.341 00:28:45.341 21:35:19 nvmf_tcp.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:28:45.341 EAL: No free 2048 kB hugepages reported on node 1 00:28:45.341 No valid NVMe controllers or AIO or URING devices found 00:28:45.341 Initializing NVMe Controllers 00:28:45.341 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:45.341 Controller IO queue size 128, less than required. 00:28:45.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.341 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:28:45.341 Controller IO queue size 128, less than required. 00:28:45.341 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:45.341 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test 00:28:45.341 WARNING: Some requested NVMe devices were skipped 00:28:45.341 21:35:20 nvmf_tcp.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:28:45.341 EAL: No free 2048 kB hugepages reported on node 1 00:28:47.870 Initializing NVMe Controllers 00:28:47.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:47.870 Controller IO queue size 128, less than required. 00:28:47.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:47.870 Controller IO queue size 128, less than required. 00:28:47.870 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:47.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:28:47.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:28:47.870 Initialization complete. Launching workers. 
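The "No valid NVMe controllers" message in the -o 36964 run above is expected rather than a failure: 36964 bytes is not a multiple of the 512-byte sector size of either namespace, so perf drops both from the test and has nothing left to run against. The divisibility check it is effectively applying:

    awk 'BEGIN { print 36964 % 512 }'   # -> 100; a valid IO size would print 0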
00:28:47.870 00:28:47.870 ==================== 00:28:47.870 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:28:47.870 TCP transport: 00:28:47.870 polls: 14226 00:28:47.870 idle_polls: 8623 00:28:47.870 sock_completions: 5603 00:28:47.870 nvme_completions: 5361 00:28:47.870 submitted_requests: 7986 00:28:47.870 queued_requests: 1 00:28:47.870 00:28:47.870 ==================== 00:28:47.870 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:28:47.870 TCP transport: 00:28:47.870 polls: 13366 00:28:47.870 idle_polls: 8371 00:28:47.870 sock_completions: 4995 00:28:47.870 nvme_completions: 6427 00:28:47.870 submitted_requests: 9546 00:28:47.870 queued_requests: 1 00:28:47.870 ======================================================== 00:28:47.870 Latency(us) 00:28:47.870 Device Information : IOPS MiB/s Average min max 00:28:47.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1337.78 334.45 98401.44 62681.30 180092.03 00:28:47.870 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1603.84 400.96 80246.30 47816.50 116235.17 00:28:47.870 ======================================================== 00:28:47.870 Total : 2941.62 735.41 88502.84 47816.50 180092.03 00:28:47.870 00:28:47.870 21:35:22 nvmf_tcp.nvmf_perf -- host/perf.sh@66 -- # sync 00:28:47.870 21:35:22 nvmf_tcp.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:48.128 21:35:22 nvmf_tcp.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:28:48.128 21:35:22 nvmf_tcp.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:88:00.0 ']' 00:28:48.128 21:35:22 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@72 -- # ls_guid=caa4550a-9ece-4926-b524-568adf78afe9 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb caa4550a-9ece-4926-b524-568adf78afe9 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=caa4550a-9ece-4926-b524-568adf78afe9 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:52.306 { 00:28:52.306 "uuid": "caa4550a-9ece-4926-b524-568adf78afe9", 00:28:52.306 "name": "lvs_0", 00:28:52.306 "base_bdev": "Nvme0n1", 00:28:52.306 "total_data_clusters": 238234, 00:28:52.306 "free_clusters": 238234, 00:28:52.306 "block_size": 512, 00:28:52.306 "cluster_size": 4194304 00:28:52.306 } 00:28:52.306 ]' 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="caa4550a-9ece-4926-b524-568adf78afe9") .free_clusters' 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=238234 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="caa4550a-9ece-4926-b524-568adf78afe9") .cluster_size' 00:28:52.306 21:35:26 
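One derived number worth pulling from the --transport-stat output above, reading idle_polls as poll iterations that found no work (an interpretation, not something the log states): the busy fraction is (polls - idle_polls) / polls. Notably, non-idle polls match sock_completions exactly for both queue pairs here (14226 - 8623 = 5603 and 13366 - 8371 = 4995), so every productive poll completed socket activity. Worked out with the logged counters:

    awk 'BEGIN { p = 14226; i = 8623; printf "qpair busy fraction: %.3f\n", (p - i) / p }'
    # -> qpair busy fraction: 0.394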
nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=952936 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 952936 00:28:52.306 952936 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u caa4550a-9ece-4926-b524-568adf78afe9 lbd_0 20480 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@80 -- # lb_guid=1f87e81c-dfc9-47f7-b1d2-ad0580146ea1 00:28:52.306 21:35:26 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1f87e81c-dfc9-47f7-b1d2-ad0580146ea1 lvs_n_0 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=2bef731c-31df-42db-8a5d-487a31c0033e 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 2bef731c-31df-42db-8a5d-487a31c0033e 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1364 -- # local lvs_uuid=2bef731c-31df-42db-8a5d-487a31c0033e 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1365 -- # local lvs_info 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1366 -- # local fc 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1367 -- # local cs 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:28:53.302 { 00:28:53.302 "uuid": "caa4550a-9ece-4926-b524-568adf78afe9", 00:28:53.302 "name": "lvs_0", 00:28:53.302 "base_bdev": "Nvme0n1", 00:28:53.302 "total_data_clusters": 238234, 00:28:53.302 "free_clusters": 233114, 00:28:53.302 "block_size": 512, 00:28:53.302 "cluster_size": 4194304 00:28:53.302 }, 00:28:53.302 { 00:28:53.302 "uuid": "2bef731c-31df-42db-8a5d-487a31c0033e", 00:28:53.302 "name": "lvs_n_0", 00:28:53.302 "base_bdev": "1f87e81c-dfc9-47f7-b1d2-ad0580146ea1", 00:28:53.302 "total_data_clusters": 5114, 00:28:53.302 "free_clusters": 5114, 00:28:53.302 "block_size": 512, 00:28:53.302 "cluster_size": 4194304 00:28:53.302 } 00:28:53.302 ]' 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="2bef731c-31df-42db-8a5d-487a31c0033e") .free_clusters' 00:28:53.302 21:35:27 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1369 -- # fc=5114 00:28:53.302 21:35:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="2bef731c-31df-42db-8a5d-487a31c0033e") .cluster_size' 00:28:53.302 21:35:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1370 -- # cs=4194304 00:28:53.302 21:35:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1373 -- # free_mb=20456 00:28:53.302 21:35:28 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1374 -- # echo 20456 00:28:53.302 20456 00:28:53.302 21:35:28 nvmf_tcp.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:28:53.302 21:35:28 nvmf_tcp.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2bef731c-31df-42db-8a5d-487a31c0033e lbd_nest_0 20456 00:28:53.560 21:35:28 nvmf_tcp.nvmf_perf -- 
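The free_mb values echoed above are just clusters times cluster size: 238234 free clusters * 4 MiB = 952936 MiB for lvs_0, which trips the -gt 20480 cap and is clamped to 20480, while the nested store's 5114 * 4 MiB = 20456 MiB is under the cap and used as-is for lbd_nest_0. A sketch of the computation get_lvs_free_mb performs (reconstructed from these values, not quoted from common.sh):

    cs=4194304                           # .cluster_size from bdev_lvol_get_lvstores
    echo $(( 238234 * cs / 1048576 ))    # lvs_0   -> 952936 MiB
    echo $((   5114 * cs / 1048576 ))    # lvs_n_0 -> 20456 MiB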
host/perf.sh@88 -- # lb_nested_guid=8a51d5a1-fdfe-47e3-9c32-04288d49b5cc 00:28:53.560 21:35:28 nvmf_tcp.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:53.818 21:35:28 nvmf_tcp.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:28:53.818 21:35:28 nvmf_tcp.nvmf_perf -- host/perf.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 8a51d5a1-fdfe-47e3-9c32-04288d49b5cc 00:28:54.075 21:35:28 nvmf_tcp.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:54.333 21:35:29 nvmf_tcp.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:28:54.333 21:35:29 nvmf_tcp.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:28:54.333 21:35:29 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:28:54.333 21:35:29 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:28:54.333 21:35:29 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:28:54.333 EAL: No free 2048 kB hugepages reported on node 1 00:29:06.511 Initializing NVMe Controllers 00:29:06.511 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:06.511 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:06.511 Initialization complete. Launching workers. 00:29:06.511 ======================================================== 00:29:06.511 Latency(us) 00:29:06.511 Device Information : IOPS MiB/s Average min max 00:29:06.511 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 45.30 0.02 22080.43 188.43 46726.95 00:29:06.511 ======================================================== 00:29:06.511 Total : 45.30 0.02 22080.43 188.43 46726.95 00:29:06.511 00:29:06.512 21:35:39 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:06.512 21:35:39 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:06.512 EAL: No free 2048 kB hugepages reported on node 1 00:29:16.476 Initializing NVMe Controllers 00:29:16.476 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:16.476 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:16.476 Initialization complete. Launching workers. 
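The remainder of the test is this same perf invocation swept over the two arrays declared above, six (queue depth, IO size) combinations in all; the two -q 1 cells have already run at this point. As a standalone loop (perf binary path shortened):

    qd_depth=(1 32 128); io_size=(512 131072)
    for qd in "${qd_depth[@]}"; do
      for o in "${io_size[@]}"; do
        spdk_nvme_perf -q "$qd" -o "$o" -w randrw -M 50 -t 10 \
            -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
      done
    done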
00:29:16.476 ======================================================== 00:29:16.476 Latency(us) 00:29:16.476 Device Information : IOPS MiB/s Average min max 00:29:16.476 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 78.10 9.76 12804.27 6003.00 47898.40 00:29:16.476 ======================================================== 00:29:16.476 Total : 78.10 9.76 12804.27 6003.00 47898.40 00:29:16.476 00:29:16.476 21:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:16.476 21:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:16.476 21:35:49 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:16.476 EAL: No free 2048 kB hugepages reported on node 1 00:29:26.446 Initializing NVMe Controllers 00:29:26.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:26.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:26.446 Initialization complete. Launching workers. 00:29:26.446 ======================================================== 00:29:26.446 Latency(us) 00:29:26.446 Device Information : IOPS MiB/s Average min max 00:29:26.446 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7670.69 3.75 4181.23 288.46 47842.96 00:29:26.446 ======================================================== 00:29:26.446 Total : 7670.69 3.75 4181.23 288.46 47842.96 00:29:26.446 00:29:26.446 21:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:26.446 21:36:00 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:26.446 EAL: No free 2048 kB hugepages reported on node 1 00:29:36.411 Initializing NVMe Controllers 00:29:36.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:36.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:36.411 Initialization complete. Launching workers. 00:29:36.411 ======================================================== 00:29:36.411 Latency(us) 00:29:36.411 Device Information : IOPS MiB/s Average min max 00:29:36.411 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 3498.60 437.32 9150.72 552.90 22186.29 00:29:36.411 ======================================================== 00:29:36.411 Total : 3498.60 437.32 9150.72 552.90 22186.29 00:29:36.411 00:29:36.411 21:36:10 nvmf_tcp.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:29:36.411 21:36:10 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:36.411 21:36:10 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:36.411 EAL: No free 2048 kB hugepages reported on node 1 00:29:46.418 Initializing NVMe Controllers 00:29:46.418 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:46.418 Controller IO queue size 128, less than required. 00:29:46.418 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:46.418 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:46.418 Initialization complete. Launching workers. 00:29:46.418 ======================================================== 00:29:46.418 Latency(us) 00:29:46.418 Device Information : IOPS MiB/s Average min max 00:29:46.418 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11911.80 5.82 10750.03 1607.43 26813.74 00:29:46.418 ======================================================== 00:29:46.418 Total : 11911.80 5.82 10750.03 1607.43 26813.74 00:29:46.418 00:29:46.418 21:36:20 nvmf_tcp.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:29:46.418 21:36:20 nvmf_tcp.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:46.418 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.376 Initializing NVMe Controllers 00:29:56.376 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:56.376 Controller IO queue size 128, less than required. 00:29:56.376 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:56.376 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:56.376 Initialization complete. Launching workers. 00:29:56.376 ======================================================== 00:29:56.377 Latency(us) 00:29:56.377 Device Information : IOPS MiB/s Average min max 00:29:56.377 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1194.10 149.26 107702.71 16071.33 233623.28 00:29:56.377 ======================================================== 00:29:56.377 Total : 1194.10 149.26 107702.71 16071.33 233623.28 00:29:56.377 00:29:56.633 21:36:31 nvmf_tcp.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:56.891 21:36:31 nvmf_tcp.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8a51d5a1-fdfe-47e3-9c32-04288d49b5cc 00:29:57.825 21:36:32 nvmf_tcp.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:29:57.825 21:36:32 nvmf_tcp.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1f87e81c-dfc9-47f7-b1d2-ad0580146ea1 00:29:58.082 21:36:32 nvmf_tcp.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:29:58.340 21:36:33 nvmf_tcp.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:29:58.340 21:36:33 nvmf_tcp.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:29:58.340 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:58.340 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:29:58.340 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:58.340 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:29:58.340 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:58.340 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:58.340 rmmod nvme_tcp 00:29:58.340 rmmod nvme_fabrics 00:29:58.340 rmmod nvme_keyring 00:29:58.597 21:36:33 
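Teardown above is the setup in reverse, and the ordering matters: the nested lvol store is deleted before the base lvol it was built on. Condensed with the same RPC shorthand as earlier (UUIDs from this run):

    $RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    $RPC bdev_lvol_delete 8a51d5a1-fdfe-47e3-9c32-04288d49b5cc   # lbd_nest_0
    $RPC bdev_lvol_delete_lvstore -l lvs_n_0
    $RPC bdev_lvol_delete 1f87e81c-dfc9-47f7-b1d2-ad0580146ea1   # lbd_0
    $RPC bdev_lvol_delete_lvstore -l lvs_0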
nvmf_tcp.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 1002575 ']' 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 1002575 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 1002575 ']' 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 1002575 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1002575 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1002575' 00:29:58.597 killing process with pid 1002575 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@967 -- # kill 1002575 00:29:58.597 21:36:33 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@972 -- # wait 1002575 00:30:00.497 21:36:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:00.497 21:36:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:00.497 21:36:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:00.497 21:36:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:00.497 21:36:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:00.497 21:36:34 nvmf_tcp.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:00.497 21:36:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:00.497 21:36:34 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.399 21:36:36 nvmf_tcp.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:02.399 00:30:02.399 real 1m31.143s 00:30:02.399 user 5m35.738s 00:30:02.399 sys 0m15.989s 00:30:02.400 21:36:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:02.400 21:36:36 nvmf_tcp.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:02.400 ************************************ 00:30:02.400 END TEST nvmf_perf 00:30:02.400 ************************************ 00:30:02.400 21:36:36 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:02.400 21:36:36 nvmf_tcp -- nvmf/nvmf.sh@99 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:02.400 21:36:36 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:02.400 21:36:36 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:02.400 21:36:36 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:02.400 ************************************ 00:30:02.400 START TEST nvmf_fio_host 00:30:02.400 ************************************ 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:30:02.400 * Looking for test 
storage... 00:30:02.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:30:02.400 21:36:36 nvmf_tcp.nvmf_fio_host -- 
common/autotest_common.sh@10 -- # set +x 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:04.304 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
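The discovery loop above classifies each candidate NIC by PCI vendor/device ID (both ports in this run are Intel E810, 0x8086:0x159b, bound to the ice driver) and then resolves the kernel interface name through sysfs. A minimal sketch of that sysfs walk, using the 0000:0a:00.* addresses reported in this log; the loop body is illustrative, not the literal common.sh code:

  # Resolve the net interface(s) sitting on each PCI function, roughly what
  # gather_supported_nvmf_pci_devs does with pci_net_devs in the trace above.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      for dev in /sys/bus/pci/devices/"$pci"/net/*; do
          [[ -e $dev ]] || continue        # skip functions with no bound netdev
          name=${dev##*/}                  # basename of the sysfs entry = ifname
          state=$(cat "$dev/operstate")    # the '[[ up == up ]]' check in the log
          echo "Found net device under $pci: $name ($state)"
      done
  done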
00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:04.304 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:04.304 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:04.304 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 
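With both ports discovered and is_hw=yes, nvmf_tcp_init (the block that follows) splits them across a network namespace so target and initiator traffic actually crosses the physical link instead of looping back in one stack. A condensed sketch of that topology, assembled from the interface names, addresses, and listener port reported in this log:

  ip netns add cvl_0_0_ns_spdk                         # target-side namespace
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # one port serves the target
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # the other stays in the root ns as initiator
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # admit NVMe/TCP
  ping -c 1 10.0.0.2                                   # initiator -> target sanity check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # and the reverse path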
00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:04.304 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:04.305 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:04.305 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:30:04.305 00:30:04.305 --- 10.0.0.2 ping statistics --- 00:30:04.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.305 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.305 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:04.305 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:30:04.305 00:30:04.305 --- 10.0.0.1 ping statistics --- 00:30:04.305 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.305 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=1015170 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 1015170 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 1015170 ']' 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.305 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:04.305 21:36:38 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.305 [2024-07-11 21:36:38.954700] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:30:04.305 [2024-07-11 21:36:38.954800] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.305 EAL: No free 2048 kB hugepages reported on node 1 00:30:04.305 [2024-07-11 21:36:39.024811] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:04.563 [2024-07-11 21:36:39.120719] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:30:04.563 [2024-07-11 21:36:39.120811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.563 [2024-07-11 21:36:39.120835] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.563 [2024-07-11 21:36:39.120861] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.563 [2024-07-11 21:36:39.120874] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:04.563 [2024-07-11 21:36:39.124779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:04.563 [2024-07-11 21:36:39.124837] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:04.563 [2024-07-11 21:36:39.124901] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:04.563 [2024-07-11 21:36:39.124905] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.563 21:36:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:04.563 21:36:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:30:04.563 21:36:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:04.820 [2024-07-11 21:36:39.481283] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.820 21:36:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:30:04.820 21:36:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:04.820 21:36:39 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:04.820 21:36:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:30:05.079 Malloc1 00:30:05.079 21:36:39 nvmf_tcp.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:05.336 21:36:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:05.594 21:36:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:05.851 [2024-07-11 21:36:40.567876] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:05.851 21:36:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' --bs=4096 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:06.109 21:36:40 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:06.366 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:06.366 fio-3.35 00:30:06.366 Starting 1 thread 00:30:06.366 EAL: No free 2048 kB hugepages reported on node 1 00:30:08.889 00:30:08.889 test: (groupid=0, jobs=1): err= 0: pid=1015577: Thu Jul 11 21:36:43 2024 00:30:08.889 read: IOPS=7924, BW=31.0MiB/s (32.5MB/s)(62.1MiB/2007msec) 00:30:08.889 slat (usec): min=2, max=161, avg= 2.71, stdev= 1.93 00:30:08.889 clat (usec): min=3216, max=15728, avg=8797.31, stdev=769.89 00:30:08.889 lat (usec): min=3235, max=15731, avg=8800.01, stdev=769.79 00:30:08.889 clat percentiles (usec): 00:30:08.889 | 1.00th=[ 7046], 5.00th=[ 7635], 10.00th=[ 7898], 20.00th=[ 8225], 00:30:08.889 | 30.00th=[ 8455], 40.00th=[ 8586], 50.00th=[ 8717], 60.00th=[ 8979], 00:30:08.889 | 70.00th=[ 9110], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10028], 00:30:08.889 | 99.00th=[10552], 99.50th=[10814], 99.90th=[14222], 99.95th=[14615], 00:30:08.889 | 99.99th=[15664] 00:30:08.889 bw ( KiB/s): min=30216, 
max=32224, per=99.89%, avg=31662.00, stdev=967.07, samples=4 00:30:08.889 iops : min= 7554, max= 8056, avg=7915.50, stdev=241.77, samples=4 00:30:08.889 write: IOPS=7894, BW=30.8MiB/s (32.3MB/s)(61.9MiB/2007msec); 0 zone resets 00:30:08.889 slat (usec): min=2, max=137, avg= 2.87, stdev= 1.68 00:30:08.889 clat (usec): min=2209, max=14343, avg=7333.77, stdev=665.32 00:30:08.889 lat (usec): min=2218, max=14345, avg=7336.63, stdev=665.29 00:30:08.889 clat percentiles (usec): 00:30:08.889 | 1.00th=[ 5866], 5.00th=[ 6325], 10.00th=[ 6521], 20.00th=[ 6849], 00:30:08.889 | 30.00th=[ 6980], 40.00th=[ 7177], 50.00th=[ 7308], 60.00th=[ 7504], 00:30:08.889 | 70.00th=[ 7635], 80.00th=[ 7832], 90.00th=[ 8094], 95.00th=[ 8291], 00:30:08.889 | 99.00th=[ 8848], 99.50th=[ 8979], 99.90th=[12518], 99.95th=[13698], 00:30:08.889 | 99.99th=[14353] 00:30:08.889 bw ( KiB/s): min=31288, max=31808, per=100.00%, avg=31582.00, stdev=222.12, samples=4 00:30:08.889 iops : min= 7822, max= 7952, avg=7895.50, stdev=55.53, samples=4 00:30:08.889 lat (msec) : 4=0.08%, 10=97.45%, 20=2.47% 00:30:08.889 cpu : usr=60.77%, sys=36.14%, ctx=58, majf=0, minf=7 00:30:08.889 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:30:08.889 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:08.889 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:08.889 issued rwts: total=15904,15845,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:08.889 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:08.889 00:30:08.889 Run status group 0 (all jobs): 00:30:08.889 READ: bw=31.0MiB/s (32.5MB/s), 31.0MiB/s-31.0MiB/s (32.5MB/s-32.5MB/s), io=62.1MiB (65.1MB), run=2007-2007msec 00:30:08.889 WRITE: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.9MiB (64.9MB), run=2007-2007msec 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # 
awk '{print $3}' 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:08.889 21:36:43 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:30:08.889 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:30:08.889 fio-3.35 00:30:08.889 Starting 1 thread 00:30:09.146 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.111 00:30:11.111 test: (groupid=0, jobs=1): err= 0: pid=1015975: Thu Jul 11 21:36:45 2024 00:30:11.111 read: IOPS=8449, BW=132MiB/s (138MB/s)(265MiB/2010msec) 00:30:11.111 slat (usec): min=2, max=114, avg= 4.01, stdev= 1.98 00:30:11.111 clat (usec): min=2314, max=16277, avg=8762.64, stdev=2099.23 00:30:11.111 lat (usec): min=2317, max=16283, avg=8766.65, stdev=2099.27 00:30:11.111 clat percentiles (usec): 00:30:11.111 | 1.00th=[ 4752], 5.00th=[ 5538], 10.00th=[ 6063], 20.00th=[ 6980], 00:30:11.111 | 30.00th=[ 7570], 40.00th=[ 8029], 50.00th=[ 8586], 60.00th=[ 9241], 00:30:11.111 | 70.00th=[ 9765], 80.00th=[10552], 90.00th=[11469], 95.00th=[12518], 00:30:11.111 | 99.00th=[14091], 99.50th=[14746], 99.90th=[15664], 99.95th=[15926], 00:30:11.111 | 99.99th=[16188] 00:30:11.111 bw ( KiB/s): min=61984, max=77504, per=51.44%, avg=69544.00, stdev=8681.91, samples=4 00:30:11.111 iops : min= 3874, max= 4844, avg=4346.50, stdev=542.62, samples=4 00:30:11.111 write: IOPS=4984, BW=77.9MiB/s (81.7MB/s)(142MiB/1826msec); 0 zone resets 00:30:11.111 slat (usec): min=30, max=194, avg=34.54, stdev= 5.81 00:30:11.111 clat (usec): min=6044, max=18322, avg=11126.70, stdev=1872.23 00:30:11.111 lat (usec): min=6076, max=18354, avg=11161.24, stdev=1872.68 00:30:11.111 clat percentiles (usec): 00:30:11.111 | 1.00th=[ 7570], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 9503], 00:30:11.111 | 30.00th=[10028], 40.00th=[10421], 50.00th=[10945], 60.00th=[11469], 00:30:11.111 | 70.00th=[11994], 80.00th=[12518], 90.00th=[13698], 95.00th=[14484], 00:30:11.111 | 99.00th=[16057], 99.50th=[16581], 99.90th=[17433], 99.95th=[17957], 00:30:11.111 | 99.99th=[18220] 00:30:11.111 bw ( KiB/s): min=63808, max=81024, per=90.89%, avg=72480.00, stdev=8884.75, samples=4 00:30:11.111 iops : min= 3988, max= 5064, avg=4530.00, stdev=555.30, samples=4 00:30:11.111 lat (msec) : 4=0.20%, 10=57.85%, 20=41.96% 00:30:11.111 cpu : usr=75.57%, sys=22.19%, ctx=34, majf=0, minf=3 00:30:11.111 IO depths : 1=0.1%, 2=0.1%, 
4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:30:11.111 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:11.111 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:11.111 issued rwts: total=16984,9101,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:11.111 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:11.111 00:30:11.111 Run status group 0 (all jobs): 00:30:11.111 READ: bw=132MiB/s (138MB/s), 132MiB/s-132MiB/s (138MB/s-138MB/s), io=265MiB (278MB), run=2010-2010msec 00:30:11.111 WRITE: bw=77.9MiB/s (81.7MB/s), 77.9MiB/s-77.9MiB/s (81.7MB/s-81.7MB/s), io=142MiB (149MB), run=1826-1826msec 00:30:11.370 21:36:45 nvmf_tcp.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:11.370 21:36:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:30:11.370 21:36:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:30:11.370 21:36:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:30:11.370 21:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # bdfs=() 00:30:11.370 21:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1513 -- # local bdfs 00:30:11.370 21:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:30:11.370 21:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:11.370 21:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:30:11.628 21:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:30:11.628 21:36:46 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:30:11.628 21:36:46 nvmf_tcp.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 -i 10.0.0.2 00:30:14.914 Nvme0n1 00:30:14.914 21:36:49 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:30:17.449 21:36:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=60dcea5b-6087-414a-8d0c-f08f5d09aa96 00:30:17.449 21:36:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 60dcea5b-6087-414a-8d0c-f08f5d09aa96 00:30:17.449 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=60dcea5b-6087-414a-8d0c-f08f5d09aa96 00:30:17.449 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:17.449 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:17.449 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:17.449 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:17.707 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:17.707 { 00:30:17.707 "uuid": "60dcea5b-6087-414a-8d0c-f08f5d09aa96", 00:30:17.707 "name": "lvs_0", 00:30:17.707 "base_bdev": "Nvme0n1", 00:30:17.707 "total_data_clusters": 930, 00:30:17.707 "free_clusters": 930, 00:30:17.707 "block_size": 512, 00:30:17.707 "cluster_size": 1073741824 
00:30:17.707 } 00:30:17.707 ]' 00:30:17.707 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="60dcea5b-6087-414a-8d0c-f08f5d09aa96") .free_clusters' 00:30:17.707 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=930 00:30:17.707 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="60dcea5b-6087-414a-8d0c-f08f5d09aa96") .cluster_size' 00:30:17.707 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=1073741824 00:30:17.707 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=952320 00:30:17.707 21:36:52 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 952320 00:30:17.707 952320 00:30:17.707 21:36:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:30:18.272 b116dbe1-8253-480e-8d8d-0b9532c26afa 00:30:18.272 21:36:52 nvmf_tcp.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:30:18.528 21:36:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:30:18.785 21:36:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:19.042 21:36:53 
nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:19.042 21:36:53 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:19.300 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:19.300 fio-3.35 00:30:19.300 Starting 1 thread 00:30:19.300 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.826 00:30:21.826 test: (groupid=0, jobs=1): err= 0: pid=1017259: Thu Jul 11 21:36:56 2024 00:30:21.826 read: IOPS=6008, BW=23.5MiB/s (24.6MB/s)(47.1MiB/2008msec) 00:30:21.826 slat (nsec): min=1932, max=144427, avg=2687.08, stdev=2149.04 00:30:21.826 clat (usec): min=934, max=171112, avg=11666.69, stdev=11630.55 00:30:21.826 lat (usec): min=937, max=171149, avg=11669.38, stdev=11630.81 00:30:21.826 clat percentiles (msec): 00:30:21.826 | 1.00th=[ 9], 5.00th=[ 10], 10.00th=[ 10], 20.00th=[ 11], 00:30:21.826 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 12], 00:30:21.826 | 70.00th=[ 12], 80.00th=[ 12], 90.00th=[ 12], 95.00th=[ 13], 00:30:21.826 | 99.00th=[ 13], 99.50th=[ 157], 99.90th=[ 171], 99.95th=[ 171], 00:30:21.826 | 99.99th=[ 171] 00:30:21.826 bw ( KiB/s): min=16846, max=26432, per=99.80%, avg=23987.50, stdev=4761.38, samples=4 00:30:21.826 iops : min= 4211, max= 6608, avg=5996.75, stdev=1190.59, samples=4 00:30:21.826 write: IOPS=5995, BW=23.4MiB/s (24.6MB/s)(47.0MiB/2008msec); 0 zone resets 00:30:21.826 slat (usec): min=2, max=106, avg= 2.79, stdev= 1.65 00:30:21.826 clat (usec): min=293, max=169340, avg=9495.80, stdev=10917.46 00:30:21.826 lat (usec): min=296, max=169345, avg=9498.60, stdev=10917.68 00:30:21.827 clat percentiles (msec): 00:30:21.827 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 8], 20.00th=[ 9], 00:30:21.827 | 30.00th=[ 9], 40.00th=[ 9], 50.00th=[ 9], 60.00th=[ 9], 00:30:21.827 | 70.00th=[ 10], 80.00th=[ 10], 90.00th=[ 10], 95.00th=[ 11], 00:30:21.827 | 99.00th=[ 11], 99.50th=[ 15], 99.90th=[ 169], 99.95th=[ 169], 00:30:21.827 | 99.99th=[ 169] 00:30:21.827 bw ( KiB/s): min=17860, max=26112, per=99.87%, avg=23951.00, stdev=4062.98, samples=4 00:30:21.827 iops : min= 4465, max= 6528, avg=5987.75, stdev=1015.74, samples=4 00:30:21.827 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:30:21.827 lat (msec) : 2=0.03%, 4=0.13%, 10=56.47%, 20=42.83%, 250=0.53% 00:30:21.827 cpu : usr=54.21%, sys=43.00%, ctx=107, majf=0, minf=25 00:30:21.827 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:21.827 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:21.827 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.1% 00:30:21.827 issued rwts: total=12066,12039,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:21.827 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:21.827 00:30:21.827 Run status group 0 (all jobs): 00:30:21.827 READ: bw=23.5MiB/s (24.6MB/s), 23.5MiB/s-23.5MiB/s (24.6MB/s-24.6MB/s), io=47.1MiB (49.4MB), run=2008-2008msec 00:30:21.827 WRITE: bw=23.4MiB/s (24.6MB/s), 23.4MiB/s-23.4MiB/s (24.6MB/s-24.6MB/s), io=47.0MiB (49.3MB), run=2008-2008msec 00:30:21.827 21:36:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:30:21.827 21:36:56 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=952da816-3749-4fe1-93e1-04aa951e6ba0 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 952da816-3749-4fe1-93e1-04aa951e6ba0 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1364 -- # local lvs_uuid=952da816-3749-4fe1-93e1-04aa951e6ba0 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1365 -- # local lvs_info 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1366 -- # local fc 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1367 -- # local cs 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1368 -- # lvs_info='[ 00:30:23.205 { 00:30:23.205 "uuid": "60dcea5b-6087-414a-8d0c-f08f5d09aa96", 00:30:23.205 "name": "lvs_0", 00:30:23.205 "base_bdev": "Nvme0n1", 00:30:23.205 "total_data_clusters": 930, 00:30:23.205 "free_clusters": 0, 00:30:23.205 "block_size": 512, 00:30:23.205 "cluster_size": 1073741824 00:30:23.205 }, 00:30:23.205 { 00:30:23.205 "uuid": "952da816-3749-4fe1-93e1-04aa951e6ba0", 00:30:23.205 "name": "lvs_n_0", 00:30:23.205 "base_bdev": "b116dbe1-8253-480e-8d8d-0b9532c26afa", 00:30:23.205 "total_data_clusters": 237847, 00:30:23.205 "free_clusters": 237847, 00:30:23.205 "block_size": 512, 00:30:23.205 "cluster_size": 4194304 00:30:23.205 } 00:30:23.205 ]' 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # jq '.[] | select(.uuid=="952da816-3749-4fe1-93e1-04aa951e6ba0") .free_clusters' 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1369 -- # fc=237847 00:30:23.205 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # jq '.[] | select(.uuid=="952da816-3749-4fe1-93e1-04aa951e6ba0") .cluster_size' 00:30:23.463 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1370 -- # cs=4194304 00:30:23.463 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1373 -- # free_mb=951388 00:30:23.463 21:36:57 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1374 -- # echo 951388 00:30:23.463 951388 00:30:23.463 21:36:57 nvmf_tcp.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:30:24.027 cc7045ba-b68b-4ba2-9abf-a31614782398 00:30:24.027 21:36:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:30:24.284 21:36:58 nvmf_tcp.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:30:24.542 21:36:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:30:24.800 21:36:59 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:30:25.057 test: (g=0): rw=randrw, bs=(R) 
4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:30:25.057 fio-3.35 00:30:25.057 Starting 1 thread 00:30:25.057 EAL: No free 2048 kB hugepages reported on node 1 00:30:27.579 00:30:27.579 test: (groupid=0, jobs=1): err= 0: pid=1017996: Thu Jul 11 21:37:02 2024 00:30:27.579 read: IOPS=5816, BW=22.7MiB/s (23.8MB/s)(45.6MiB/2009msec) 00:30:27.579 slat (usec): min=2, max=118, avg= 2.72, stdev= 1.79 00:30:27.579 clat (usec): min=4355, max=20389, avg=12073.84, stdev=1078.22 00:30:27.579 lat (usec): min=4360, max=20391, avg=12076.56, stdev=1078.11 00:30:27.579 clat percentiles (usec): 00:30:27.579 | 1.00th=[ 9634], 5.00th=[10421], 10.00th=[10814], 20.00th=[11207], 00:30:27.579 | 30.00th=[11600], 40.00th=[11863], 50.00th=[11994], 60.00th=[12256], 00:30:27.579 | 70.00th=[12649], 80.00th=[12911], 90.00th=[13304], 95.00th=[13698], 00:30:27.579 | 99.00th=[14484], 99.50th=[14746], 99.90th=[17695], 99.95th=[19268], 00:30:27.579 | 99.99th=[20317] 00:30:27.579 bw ( KiB/s): min=21848, max=23848, per=99.86%, avg=23234.00, stdev=935.04, samples=4 00:30:27.579 iops : min= 5462, max= 5962, avg=5808.50, stdev=233.76, samples=4 00:30:27.579 write: IOPS=5800, BW=22.7MiB/s (23.8MB/s)(45.5MiB/2009msec); 0 zone resets 00:30:27.579 slat (usec): min=2, max=108, avg= 2.86, stdev= 1.53 00:30:27.579 clat (usec): min=2053, max=17430, avg=9770.92, stdev=905.94 00:30:27.579 lat (usec): min=2070, max=17432, avg=9773.78, stdev=905.88 00:30:27.579 clat percentiles (usec): 00:30:27.579 | 1.00th=[ 7701], 5.00th=[ 8455], 10.00th=[ 8717], 20.00th=[ 9110], 00:30:27.579 | 30.00th=[ 9372], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10028], 00:30:27.579 | 70.00th=[10159], 80.00th=[10421], 90.00th=[10814], 95.00th=[11076], 00:30:27.579 | 99.00th=[11731], 99.50th=[12125], 99.90th=[14877], 99.95th=[16319], 00:30:27.579 | 99.99th=[17433] 00:30:27.579 bw ( KiB/s): min=22872, max=23424, per=99.95%, avg=23190.00, stdev=230.51, samples=4 00:30:27.579 iops : min= 5718, max= 5856, avg=5797.50, stdev=57.63, samples=4 00:30:27.579 lat (msec) : 4=0.05%, 10=31.89%, 20=68.04%, 50=0.02% 00:30:27.579 cpu : usr=59.51%, sys=37.80%, ctx=113, majf=0, minf=25 00:30:27.579 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.7% 00:30:27.579 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:27.579 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:30:27.579 issued rwts: total=11686,11653,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:27.579 latency : target=0, window=0, percentile=100.00%, depth=128 00:30:27.579 00:30:27.579 Run status group 0 (all jobs): 00:30:27.579 READ: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.6MiB (47.9MB), run=2009-2009msec 00:30:27.579 WRITE: bw=22.7MiB/s (23.8MB/s), 22.7MiB/s-22.7MiB/s (23.8MB/s-23.8MB/s), io=45.5MiB (47.7MB), run=2009-2009msec 00:30:27.579 21:37:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:30:27.579 21:37:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:30:27.579 21:37:02 nvmf_tcp.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_n_0/lbd_nest_0 00:30:31.763 21:37:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:30:31.763 21:37:06 nvmf_tcp.nvmf_fio_host -- host/fio.sh@78 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:30:35.075 21:37:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:30:35.075 21:37:09 nvmf_tcp.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:36.977 rmmod nvme_tcp 00:30:36.977 rmmod nvme_fabrics 00:30:36.977 rmmod nvme_keyring 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 1015170 ']' 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 1015170 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 1015170 ']' 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 1015170 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1015170 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1015170' 00:30:36.977 killing process with pid 1015170 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 1015170 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 1015170 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 
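Teardown mirrors setup in reverse: the host-side kernel modules are unloaded, the nvmf_tgt reactor started earlier (pid 1015170) is killed, _remove_spdk_ns tears down the namespace, and the leftover initiator address is flushed. A condensed sketch of that sequence; the ip netns delete step is an assumption about _remove_spdk_ns, whose body is not shown in this log:

  modprobe -v -r nvme-tcp nvme-fabrics      # unload the host-side modules pulled in for fio
  kill 1015170                              # pid from nvmfpid above
  while kill -0 1015170 2>/dev/null; do sleep 0.1; done   # wait for the target to exit
  ip netns delete cvl_0_0_ns_spdk           # assumed: returns cvl_0_0 to the root ns
  ip -4 addr flush cvl_0_1                  # final flush, as logged just below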
00:30:36.977 21:37:11 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.514 21:37:13 nvmf_tcp.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:39.514 00:30:39.514 real 0m36.897s 00:30:39.514 user 2m21.477s 00:30:39.514 sys 0m7.104s 00:30:39.514 21:37:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:39.514 21:37:13 nvmf_tcp.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:30:39.514 ************************************ 00:30:39.514 END TEST nvmf_fio_host 00:30:39.514 ************************************ 00:30:39.514 21:37:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:30:39.514 21:37:13 nvmf_tcp -- nvmf/nvmf.sh@100 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:39.514 21:37:13 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:30:39.514 21:37:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:39.514 21:37:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:30:39.514 ************************************ 00:30:39.514 START TEST nvmf_failover 00:30:39.514 ************************************ 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:30:39.514 * Looking for test storage... 00:30:39.514 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:39.514 21:37:13 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:30:39.514 21:37:13 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:41.414 21:37:15 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:30:41.414 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:30:41.414 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.414 21:37:15 
nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.414 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:30:41.415 Found net devices under 0000:0a:00.0: cvl_0_0 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:30:41.415 Found net devices under 0000:0a:00.1: cvl_0_1 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:41.415 
21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:30:41.415 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:41.415 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.122 ms 00:30:41.415 00:30:41.415 --- 10.0.0.2 ping statistics --- 00:30:41.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.415 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:41.415 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:41.415 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.176 ms 00:30:41.415 00:30:41.415 --- 10.0.0.1 ping statistics --- 00:30:41.415 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:41.415 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=1021239 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 1021239 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1021239 ']' 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:30:41.415 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:41.415 21:37:15 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:41.415 [2024-07-11 21:37:15.934386] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:30:41.415 [2024-07-11 21:37:15.934472] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:41.415 EAL: No free 2048 kB hugepages reported on node 1 00:30:41.415 [2024-07-11 21:37:15.998168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:30:41.415 [2024-07-11 21:37:16.082303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:41.415 [2024-07-11 21:37:16.082373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:41.415 [2024-07-11 21:37:16.082397] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:41.415 [2024-07-11 21:37:16.082408] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:41.415 [2024-07-11 21:37:16.082424] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:41.415 [2024-07-11 21:37:16.082518] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:30:41.415 [2024-07-11 21:37:16.082593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:30:41.415 [2024-07-11 21:37:16.082595] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:41.671 21:37:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:41.671 21:37:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:41.671 21:37:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:30:41.671 21:37:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:41.671 21:37:16 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:41.671 21:37:16 nvmf_tcp.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:41.671 21:37:16 nvmf_tcp.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:41.927 [2024-07-11 21:37:16.475579] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:41.927 21:37:16 nvmf_tcp.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:30:42.183 Malloc0 00:30:42.183 21:37:16 nvmf_tcp.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:42.441 21:37:17 nvmf_tcp.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:42.700 21:37:17 nvmf_tcp.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.957 [2024-07-11 21:37:17.613253] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:42.957 21:37:17 nvmf_tcp.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:43.214 [2024-07-11 21:37:17.857962] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:43.214 21:37:17 nvmf_tcp.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:43.470 [2024-07-11 21:37:18.098871] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=1021527 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 1021527 /var/tmp/bdevperf.sock 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1021527 ']' 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:43.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
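At this point nvmftestinit has moved one port of the e810 NIC (cvl_0_0, 10.0.0.2) into the cvl_0_0_ns_spdk namespace as the target side and left its sibling port (cvl_0_1, 10.0.0.1) in the root namespace as the initiator, verified by the two pings, and failover.sh has built the target. A condensed sketch of that setup using the same RPCs as the trace above; the three add_listener calls are folded into a loop here, and $SPDK again shortens the workspace path:

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    rpc=$SPDK/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    $rpc nvmf_create_transport -t tcp -o -u 8192      # flags as traced; -u 8192 = in-capsule data size
    $rpc bdev_malloc_create 64 512 -b Malloc0         # 64 MiB RAM-backed bdev, 512 B blocks
    $rpc nvmf_create_subsystem $nqn -a -s SPDK00000000000001   # -a: allow any host
    $rpc nvmf_subsystem_add_ns $nqn Malloc0           # export Malloc0 as a namespace
    for port in 4420 4421 4422; do                    # one listener per failover target
        $rpc nvmf_subsystem_add_listener $nqn -t tcp -a 10.0.0.2 -s "$port"
    done

    # bdevperf runs as a separate process on its own RPC socket, started with
    # -z so it waits for RPCs; controllers are attached before I/O begins.
    $SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock \
        -q 128 -o 4096 -w verify -t 15 -f &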
00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:43.470 21:37:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:30:43.725 21:37:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:43.725 21:37:18 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:30:43.725 21:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.288 NVMe0n1 00:30:44.288 21:37:18 nvmf_tcp.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:44.854 00:30:44.854 21:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=1021659 00:30:44.854 21:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:44.854 21:37:19 nvmf_tcp.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:30:45.791 21:37:20 nvmf_tcp.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:46.051 21:37:20 nvmf_tcp.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:30:49.372 21:37:23 nvmf_tcp.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:49.372 00:30:49.372 21:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:49.629 21:37:24 nvmf_tcp.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:30:52.915 21:37:27 nvmf_tcp.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:52.915 [2024-07-11 21:37:27.612251] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:52.915 21:37:27 nvmf_tcp.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:30:53.879 21:37:28 nvmf_tcp.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:30:54.443 21:37:28 nvmf_tcp.nvmf_failover -- host/failover.sh@59 -- # wait 1021659 00:31:01.012 0 00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@61 -- # killprocess 1021527 00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1021527 ']' 00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1021527 00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o 
comm= 1021527
00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1021527'
00:31:01.012 killing process with pid 1021527
00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1021527
00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1021527
00:31:01.012 21:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:31:01.012 [2024-07-11 21:37:18.163385] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:31:01.012 [2024-07-11 21:37:18.163478] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1021527 ]
00:31:01.012 EAL: No free 2048 kB hugepages reported on node 1
00:31:01.012 [2024-07-11 21:37:18.224210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:31:01.012 [2024-07-11 21:37:18.312101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:31:01.012 Running I/O for 15 seconds...
00:31:01.012 [2024-07-11 21:37:20.616407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:76040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:31:01.012 [2024-07-11 21:37:20.616477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[try.txt condensed: several hundred further nvme_qpair.c NOTICE pairs of the same shape follow, alternating 243:nvme_io_qpair_print_command WRITE and READ commands on sqid:1 (lba 75384-76400, len:8) with 474:spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" completions, i.e. the in-flight I/O cancelled as submission queues were torn down during the failover sequence]
00:31:01.014 [2024-07-11 21:37:20.619407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:75800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:75808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:75816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:75824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:75832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:75840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:75848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:75856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:75864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.014 [2024-07-11 21:37:20.619680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.014 [2024-07-11 21:37:20.619694] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:75872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:75880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:75896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:75904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:75912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:75920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:75928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:75936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.619974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.619988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620003] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:75952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:75960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:75968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:75976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:75984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:75992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:76000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:76016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.015 [2024-07-11 21:37:20.620281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.015 [2024-07-11 21:37:20.620311] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: 
00:31:01.015 [2024-07-11 21:37:20.620327] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:01.015 [2024-07-11 21:37:20.620339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:76032 len:8 PRP1 0x0 PRP2 0x0
00:31:01.015 [2024-07-11 21:37:20.620352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:01.015 [2024-07-11 21:37:20.620417] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xdef250 was disconnected and freed. reset controller.
00:31:01.015 [2024-07-11 21:37:20.620437] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:31:01.015 [2024-07-11 21:37:20.620471] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:01.015 [2024-07-11 21:37:20.620490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:01.015 [2024-07-11 21:37:20.620506] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:31:01.015 [2024-07-11 21:37:20.620519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:01.015 [2024-07-11 21:37:20.620534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:31:01.015 [2024-07-11 21:37:20.620548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:01.015 [2024-07-11 21:37:20.620562] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:31:01.015 [2024-07-11 21:37:20.620576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:31:01.015 [2024-07-11 21:37:20.620600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:01.015 [2024-07-11 21:37:20.620650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc8bd0 (9): Bad file descriptor
00:31:01.015 [2024-07-11 21:37:20.623911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:01.015 [2024-07-11 21:37:20.653040] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:31:01.015 [2024-07-11 21:37:24.316267 - 21:37:24.317493] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: READ sqid:1 nsid:1 lba:77800-78104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated command/completion pairs with varying cid elided)
00:31:01.016 [2024-07-11 21:37:24.317508 - 21:37:24.319831] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: WRITE sqid:1 nsid:1 lba:78128-78760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated command/completion pairs with varying cid elided)
00:31:01.017 [2024-07-11 21:37:24.319867 - 21:37:24.320239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o / 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: WRITE sqid:1 cid:0 nsid:1 lba:78768-78816 len:8 PRP1 0x0 PRP2 0x0 and READ sqid:1 cid:0 nsid:1 lba:78112 len:8 PRP1 0x0 PRP2 0x0, each completed ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 (repeated abort/manual-complete sequences elided)
00:31:01.018 [2024-07-11
21:37:24.320251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:24.320263] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.018 [2024-07-11 21:37:24.320273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.018 [2024-07-11 21:37:24.320284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:78120 len:8 PRP1 0x0 PRP2 0x0 00:31:01.018 [2024-07-11 21:37:24.320296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:24.320355] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf93a00 was disconnected and freed. reset controller. 00:31:01.018 [2024-07-11 21:37:24.320373] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:31:01.018 [2024-07-11 21:37:24.320406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.018 [2024-07-11 21:37:24.320440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:24.320456] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.018 [2024-07-11 21:37:24.320469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:24.320482] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.018 [2024-07-11 21:37:24.320496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:24.320509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:01.018 [2024-07-11 21:37:24.320523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:24.320536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:01.018 [2024-07-11 21:37:24.320574] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc8bd0 (9): Bad file descriptor 00:31:01.018 [2024-07-11 21:37:24.324043] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:01.018 [2024-07-11 21:37:24.440571] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
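The abort records in this log all share the same nvme_qpair.c print format, which makes them easy to tally offline. Below is a minimal sketch, assuming only the Python standard library and the record format shown above; the helper name summarize_aborts is hypothetical, not an SPDK API, and it assumes (as is true for this teardown log) that every printed command was subsequently aborted.

# Minimal sketch: tally the commands that nvme_qpair.c printed while the
# queue pair was being torn down. In this log every printed command is
# followed by an "ABORTED - SQ DELETION (00/08)" completion, so counting
# the printed commands counts the aborts.
import re
from collections import Counter

CMD_RE = re.compile(
    r"nvme_io_qpair_print_command: \*NOTICE\*: (READ|WRITE) "
    r"sqid:(\d+) cid:(\d+) nsid:(\d+) lba:(\d+) len:(\d+)"
)

def summarize_aborts(log_text):
    """Count aborted commands per opcode and report the LBA range touched."""
    counts = Counter()
    lbas = []
    for op, _sqid, _cid, _nsid, lba, _len in CMD_RE.findall(log_text):
        counts[op] += 1
        lbas.append(int(lba))
    return {
        "per_opcode": dict(counts),
        "lba_range": (min(lbas), max(lbas)) if lbas else None,
    }

# Run against the unabridged log of the first teardown above, this yields:
# {'per_opcode': {'WRITE': 35, 'READ': 2}, 'lba_range': (78112, 78816)}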
00:31:01.018 [2024-07-11 21:37:28.889454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:18728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:18760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:18768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:18776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:18784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:18792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889841] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:18800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:18808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:18832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.889983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:18840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.889996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:18848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:18872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:18896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:18904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:18920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:18928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:18936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:18952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890408] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:18968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:18976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.018 [2024-07-11 21:37:28.890474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.018 [2024-07-11 21:37:28.890488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.019 [2024-07-11 21:37:28.890501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.019 [2024-07-11 21:37:28.890528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:19000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.019 [2024-07-11 21:37:28.890554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:19008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.019 [2024-07-11 21:37:28.890581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.019 [2024-07-11 21:37:28.890607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:19024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:31:01.019 [2024-07-11 21:37:28.890637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:19056 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:19064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:19080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:19088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:19096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:19104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:19112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:19120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:19128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.890967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.890981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:31:01.019 [2024-07-11 21:37:28.890994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:19152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:19160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:19176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:19184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:19200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:19208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:19216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891287] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:19224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:19232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:19240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:19248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:19264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:19272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:19280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:19288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:19296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:19312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:19320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:19336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:19344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.019 [2024-07-11 21:37:28.891750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:19352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.019 [2024-07-11 21:37:28.891771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.891786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:19360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.891799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.891813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:19368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.891826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.891841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.891854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.891868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:19384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.891881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.891896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:19392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.891909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.891923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:19400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.891936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.891950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.891964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.891978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:19416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.891991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:31:01.020 [2024-07-11 21:37:28.892019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892056] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19432 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892105] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892117] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19440 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892160] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892170] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19448 len:8 PRP1 
0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892206] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892217] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892253] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892264] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19464 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892300] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892310] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19472 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892347] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19480 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892393] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892404] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19488 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892440] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892451] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19496 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892474] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892500] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19504 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892536] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892547] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19512 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892582] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892593] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892628] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892638] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19528 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892673] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19536 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892719] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19544 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892792] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892804] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892841] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892853] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19560 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892893] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892904] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19568 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892952] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.892963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19576 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.892976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.892989] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.892999] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.893010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19584 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.893023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:01.020 [2024-07-11 21:37:28.893036] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:31:01.020 [2024-07-11 21:37:28.893046] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:31:01.020 [2024-07-11 21:37:28.893057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19592 len:8 PRP1 0x0 PRP2 0x0 00:31:01.020 [2024-07-11 21:37:28.893084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0
00:31:01.020 [2024-07-11 21:37:28.893098] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:31:01.020 [2024-07-11 21:37:28.893109] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:31:01.020 [2024-07-11 21:37:28.893120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19600 len:8 PRP1 0x0 PRP2 0x0
00:31:01.020 [2024-07-11 21:37:28.893132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same four-line abort/manual-complete sequence repeats from 21:37:28.893145 through 21:37:28.894095 for WRITE lba:19608 through lba:19736 in steps of 8, then for READ lba:19032 and lba:19040]
00:31:01.021 [2024-07-11 21:37:28.894153] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf937f0 was disconnected and freed. reset controller.
00:31:01.021 [2024-07-11 21:37:28.894170] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:31:01.021 [2024-07-11 21:37:28.894204] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:31:01.021 [2024-07-11 21:37:28.894238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[the same command/completion pair repeats for ASYNC EVENT REQUEST cid:1, cid:2 and cid:3]
00:31:01.021 [2024-07-11 21:37:28.894333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:31:01.021 [2024-07-11 21:37:28.894385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xdc8bd0 (9): Bad file descriptor
00:31:01.021 [2024-07-11 21:37:28.897564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:31:01.021 [2024-07-11 21:37:28.931769] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
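The run above is bdev_nvme draining its queue when the active path (10.0.0.2:4422) drops: every queued WRITE and READ is completed manually with ABORTED - SQ DELETION (status 00/08) before the driver fails over to 10.0.0.2:4420 and resets the controller. A hypothetical one-liner (not part of the test suite) to summarize which LBAs were aborted in a captured log such as try.txt:

    # extract every lba:<n> token, sort numerically, count occurrences
    grep -o 'lba:[0-9]*' try.txt | sort -t: -k2 -n | uniq -c

The WRITEs here span lba:19600 through lba:19736 in strides of 8 blocks, consistent with the 4096-byte I/O size of the verify workload assuming a 512-byte-block namespace.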
00:31:01.021
00:31:01.021 Latency(us)
00:31:01.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:01.021 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:01.021 Verification LBA range: start 0x0 length 0x4000
00:31:01.021 NVMe0n1 : 15.01 8210.48 32.07 468.15 0.00 14719.32 558.27 17864.63
00:31:01.021 ===================================================================================================================
00:31:01.021 Total : 8210.48 32.07 468.15 0.00 14719.32 558.27 17864.63
00:31:01.021 Received shutdown signal, test time was about 15.000000 seconds
00:31:01.021
00:31:01.021 Latency(us)
00:31:01.021 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:01.021 ===================================================================================================================
00:31:01.021 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@65 -- # count=3
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=1023493
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 1023493 /var/tmp/bdevperf.sock
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 1023493 ']'
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
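The assertion at host/failover.sh@65-67 above reduces to a small shell check; a condensed sketch, using the try.txt path that the trace cats and removes later (exact helper names in failover.sh may differ):

    # one 'Resetting controller successful' line is expected per forced failover
    count=$(grep -c 'Resetting controller successful' \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
    (( count == 3 )) || exit 1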
00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:01.021 21:37:34 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:31:01.021 21:37:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:01.021 21:37:35 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:31:01.021 21:37:35 nvmf_tcp.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:01.021 [2024-07-11 21:37:35.264553] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:01.021 21:37:35 nvmf_tcp.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:31:01.021 [2024-07-11 21:37:35.517266] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:31:01.021 21:37:35 nvmf_tcp.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:01.279 NVMe0n1 00:31:01.279 21:37:35 nvmf_tcp.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:01.536 00:31:01.536 21:37:36 nvmf_tcp.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.102 00:31:02.102 21:37:36 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:02.102 21:37:36 nvmf_tcp.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:31:02.358 21:37:36 nvmf_tcp.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:02.617 21:37:37 nvmf_tcp.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:31:05.913 21:37:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:31:05.913 21:37:40 nvmf_tcp.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:31:05.913 21:37:40 nvmf_tcp.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=1024164 00:31:05.913 21:37:40 nvmf_tcp.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:05.913 21:37:40 nvmf_tcp.nvmf_failover -- host/failover.sh@92 -- # wait 1024164 00:31:06.847 0 00:31:06.847 21:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:06.848 [2024-07-11 21:37:34.794345] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:06.848 [2024-07-11 21:37:34.794453] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1023493 ] 00:31:06.848 EAL: No free 2048 kB hugepages reported on node 1 00:31:06.848 [2024-07-11 21:37:34.858298] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:06.848 [2024-07-11 21:37:34.942640] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:06.848 [2024-07-11 21:37:37.181834] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:31:06.848 [2024-07-11 21:37:37.181914] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.848 [2024-07-11 21:37:37.181937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.848 [2024-07-11 21:37:37.181953] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.848 [2024-07-11 21:37:37.181967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.848 [2024-07-11 21:37:37.181981] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.848 [2024-07-11 21:37:37.181994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.848 [2024-07-11 21:37:37.182007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:06.848 [2024-07-11 21:37:37.182021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:06.848 [2024-07-11 21:37:37.182035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:31:06.848 [2024-07-11 21:37:37.182077] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:31:06.848 [2024-07-11 21:37:37.182109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x776bd0 (9): Bad file descriptor 00:31:06.848 [2024-07-11 21:37:37.192302] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:31:06.848 Running I/O for 1 seconds... 
00:31:06.848
00:31:06.848 Latency(us)
00:31:06.848 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:06.848 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:31:06.848 Verification LBA range: start 0x0 length 0x4000
00:31:06.848 NVMe0n1 : 1.00 8558.65 33.43 0.00 0.00 14894.46 782.79 12281.93
00:31:06.848 ===================================================================================================================
00:31:06.848 Total : 8558.65 33.43 0.00 0.00 14894.46 782.79 12281.93
00:31:06.848 21:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
21:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:31:07.105 21:37:41 nvmf_tcp.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:07.362 21:37:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
21:37:42 nvmf_tcp.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:31:07.620 21:37:42 nvmf_tcp.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:31:07.876 21:37:42 nvmf_tcp.nvmf_failover -- host/failover.sh@101 -- # sleep 3
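Steps @95 through @101 above are the core of the detach-driven failover loop: confirm the controller is still reported over bdevperf's RPC socket, then remove the path currently in use. The same sequence as a standalone sketch, with rpc standing in for the full rpc.py path shown in the trace:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    sock=/var/tmp/bdevperf.sock
    nqn=nqn.2016-06.io.spdk:cnode1
    for port in 4422 4421; do
        # the controller must still be visible before the next path is removed
        "$rpc" -s "$sock" bdev_nvme_get_controllers | grep -q NVMe0
        "$rpc" -s "$sock" bdev_nvme_detach_controller NVMe0 \
            -t tcp -a 10.0.0.2 -s "$port" -f ipv4 -n "$nqn"
    done
    sleep 3    # give bdev_nvme time to fail over to the remaining listener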
21:37:46 nvmf_tcp.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:11.663 rmmod nvme_tcp 00:31:11.663 rmmod nvme_fabrics 00:31:11.663 rmmod nvme_keyring 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 1021239 ']' 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 1021239 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 1021239 ']' 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 1021239 00:31:11.663 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:31:11.921 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:11.921 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1021239 00:31:11.921 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:11.921 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:11.921 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1021239' 00:31:11.921 killing process with pid 1021239 00:31:11.921 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@967 -- # kill 1021239 00:31:11.921 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@972 -- # wait 1021239 00:31:12.179 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:12.179 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:12.179 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:12.179 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:12.179 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:12.179 21:37:46 nvmf_tcp.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:12.179 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:12.179 21:37:46 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.077 21:37:48 nvmf_tcp.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:14.077 00:31:14.077 real 0m34.910s 00:31:14.077 user 2m1.201s 00:31:14.077 sys 0m6.815s 00:31:14.077 21:37:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:14.077 21:37:48 nvmf_tcp.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 
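The teardown traced above compresses to a few commands (a sketch only; nvmftestfini and killprocess are helpers from nvmf/common.sh and autotest_common.sh that do more than shown here, such as removing the target namespace):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
    rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
    modprobe -v -r nvme-tcp        # the rmmod lines above show this also drops nvme_fabrics and nvme_keyring
    modprobe -v -r nvme-fabrics
    kill "$nvmfpid"                # 1021239 in this run
    ip -4 addr flush cvl_0_1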
00:31:14.077 ************************************ 00:31:14.077 END TEST nvmf_failover 00:31:14.077 ************************************ 00:31:14.077 21:37:48 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:14.077 21:37:48 nvmf_tcp -- nvmf/nvmf.sh@101 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:14.077 21:37:48 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:14.077 21:37:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:14.077 21:37:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:14.077 ************************************ 00:31:14.077 START TEST nvmf_host_discovery 00:31:14.077 ************************************ 00:31:14.077 21:37:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:31:14.077 * Looking for test storage... 00:31:14.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:14.077 21:37:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.077 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:31:14.077 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.077 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:14.078 21:37:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- scripts/common.sh@517 
-- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:31:14.336 21:37:48 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:14.336 21:37:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:14.337 21:37:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:14.337 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:14.337 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:14.337 21:37:48 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:31:14.337 21:37:48 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:16.239 21:37:50 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:16.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:16.239 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:31:16.239 21:37:50 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:16.239 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:16.239 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:16.239 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:16.239 21:37:50 
nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:16.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:16.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.123 ms 00:31:16.240 00:31:16.240 --- 10.0.0.2 ping statistics --- 00:31:16.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.240 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:16.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:16.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:31:16.240 00:31:16.240 --- 10.0.0.1 ping statistics --- 00:31:16.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:16.240 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=1026763 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 1026763 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1026763 ']' 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:16.240 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:16.240 21:37:50 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.240 [2024-07-11 21:37:50.965415] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:16.240 [2024-07-11 21:37:50.965503] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:16.240 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.504 [2024-07-11 21:37:51.030645] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.504 [2024-07-11 21:37:51.114295] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:16.504 [2024-07-11 21:37:51.114350] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:16.504 [2024-07-11 21:37:51.114375] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:16.504 [2024-07-11 21:37:51.114387] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:16.504 [2024-07-11 21:37:51.114397] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
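The host/target split in this trace uses a network namespace rather than two machines: the target-side port cvl_0_0 is moved into cvl_0_0_ns_spdk and nvmf_tgt runs inside it, so initiator traffic from cvl_0_1 (10.0.0.1) to the target (10.0.0.2) crosses a real link. Condensed from the nvmf/common.sh trace above (commands as executed, run as root):

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # the target is then launched inside the namespace, as in the trace:
    ip netns exec cvl_0_0_ns_spdk \
        /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2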
00:31:16.504 [2024-07-11 21:37:51.114422] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.504 [2024-07-11 21:37:51.255295] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.504 [2024-07-11 21:37:51.263505] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.504 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.765 null0 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.765 null1 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=1026791 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 1026791 /tmp/host.sock 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 1026791 ']' 00:31:16.765 21:37:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:31:16.765 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:16.765 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:16.765 [2024-07-11 21:37:51.340807] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:31:16.765 [2024-07-11 21:37:51.340892] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1026791 ] 00:31:16.765 EAL: No free 2048 kB hugepages reported on node 1 00:31:16.765 [2024-07-11 21:37:51.406933] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:16.765 [2024-07-11 21:37:51.499156] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:17.022 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:17.022 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:31:17.022 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:17.022 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:31:17.022 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.022 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.022 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # 
sort 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:17.023 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:31:17.280 21:37:51 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 [2024-07-11 21:37:51.917301] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' 
]] 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:17.280 21:37:51 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.280 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:31:17.537 21:37:52 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:18.157 [2024-07-11 21:37:52.684439] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:18.157 [2024-07-11 21:37:52.684481] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:18.157 [2024-07-11 21:37:52.684509] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:18.157 [2024-07-11 21:37:52.770765] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:31:18.417 [2024-07-11 21:37:52.954912] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: 
Discovery[10.0.0.2:8009] attach nvme0 done 00:31:18.417 [2024-07-11 21:37:52.954939] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:18.417 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:18.418 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.419 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:31:18.419 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:18.419 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:18.419 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:31:18.419 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:18.419 21:37:53 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.419 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:31:18.419 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:18.419 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:18.420 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:18.420 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.420 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.420 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:18.420 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery 
-- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.683 [2024-07-11 21:37:53.353847] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:18.683 [2024-07-11 21:37:53.354906] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:18.683 [2024-07-11 21:37:53.354959] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.683 [2024-07-11 21:37:53.441570] bdev_nvme.c:6907:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:18.683 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:18.684 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:18.684 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:18.684 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:18.684 21:37:53 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:18.684 21:37:53 nvmf_tcp.nvmf_host_discovery 
-- host/discovery.sh@63 -- # xargs 00:31:18.941 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:18.941 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:31:18.941 21:37:53 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:18.941 [2024-07-11 21:37:53.547246] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:18.941 [2024-07-11 21:37:53.547274] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:31:18.941 [2024-07-11 21:37:53.547285] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.873 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.873 [2024-07-11 21:37:54.586008] bdev_nvme.c:6965:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:31:19.873 [2024-07-11 21:37:54.586057] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:19.873 [2024-07-11 21:37:54.586181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.873 [2024-07-11 21:37:54.586215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.873 [2024-07-11 21:37:54.586234] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.873 [2024-07-11 21:37:54.586250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.873 [2024-07-11 21:37:54.586266] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.873 [2024-07-11 21:37:54.586281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.873 [2024-07-11 21:37:54.586297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:31:19.873 [2024-07-11 21:37:54.586311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:31:19.874 [2024-07-11 21:37:54.586326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd9640 is same with the state(5) to be set 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:19.874 21:37:54 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:19.874 [2024-07-11 21:37:54.596310] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd9640 (9): Bad file descriptor 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:19.874 [2024-07-11 21:37:54.606352] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:19.874 [2024-07-11 21:37:54.606575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.874 [2024-07-11 21:37:54.606609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd9640 with addr=10.0.0.2, port=4420 00:31:19.874 [2024-07-11 21:37:54.606628] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd9640 is same with the state(5) to be set 00:31:19.874 [2024-07-11 21:37:54.606654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd9640 (9): Bad file descriptor 00:31:19.874 [2024-07-11 21:37:54.606679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:19.874 [2024-07-11 21:37:54.606696] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:19.874 [2024-07-11 21:37:54.606714] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:19.874 [2024-07-11 21:37:54.606745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
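(The repeated "connect() failed, errno = 111" records above and below are the host-side retry loop after the test removes the 4420 listener; errno 111 is ECONNREFUSED, meaning nothing is listening on 10.0.0.2:4420 any more, so every controller reset fails until the discovery poller prunes the stale path. A minimal sketch of the target-side step that triggers this, assuming the stock SPDK RPC client at scripts/rpc.py — the NQN, address, and port are taken verbatim from the host/discovery.sh@127 call in the trace:

    # Drop the first listener; hosts still connected via 10.0.0.2:4420 start
    # failing reconnects with ECONNREFUSED until discovery removes the path.
    scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 \
        -t tcp -a 10.0.0.2 -s 4420
)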
00:31:19.874 [2024-07-11 21:37:54.616447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:19.874 [2024-07-11 21:37:54.616627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.874 [2024-07-11 21:37:54.616659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd9640 with addr=10.0.0.2, port=4420 00:31:19.874 [2024-07-11 21:37:54.616678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd9640 is same with the state(5) to be set 00:31:19.874 [2024-07-11 21:37:54.616704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd9640 (9): Bad file descriptor 00:31:19.874 [2024-07-11 21:37:54.616727] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:19.874 [2024-07-11 21:37:54.616750] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:19.874 [2024-07-11 21:37:54.616777] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:19.874 [2024-07-11 21:37:54.616814] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:19.874 [2024-07-11 21:37:54.626523] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:19.874 [2024-07-11 21:37:54.626728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.874 [2024-07-11 21:37:54.626768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd9640 with addr=10.0.0.2, port=4420 00:31:19.874 [2024-07-11 21:37:54.626789] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd9640 is same with the state(5) to be set 00:31:19.874 [2024-07-11 21:37:54.626831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd9640 (9): Bad file descriptor 00:31:19.874 [2024-07-11 21:37:54.626854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:19.874 [2024-07-11 21:37:54.626869] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:19.874 [2024-07-11 21:37:54.626883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:19.874 [2024-07-11 21:37:54.626903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
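(The port checks that bracket this reconnect loop come from the get_subsystem_paths helper traced at host/discovery.sh@63. A sketch reconstructed from the traced pipeline — the body below is inferred from the xtrace output, not copied from the script source:

    get_subsystem_paths() {
        # Print the trsvcid (TCP port) of every connected path for controller $1.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" \
            | jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
    }

With both listeners up it prints "4420 4421"; once the removal above takes effect it converges to "4421", which is what discovery.sh@131 waits for below.)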
00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:19.874 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:19.874 [2024-07-11 21:37:54.636600] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:19.874 [2024-07-11 21:37:54.636812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:19.874 [2024-07-11 21:37:54.636843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd9640 with addr=10.0.0.2, port=4420 00:31:19.874 [2024-07-11 21:37:54.636860] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd9640 is same with the state(5) to be set 00:31:19.874 [2024-07-11 21:37:54.636883] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd9640 (9): Bad file descriptor 00:31:19.874 [2024-07-11 21:37:54.636905] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:19.874 [2024-07-11 21:37:54.636920] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:19.874 [2024-07-11 21:37:54.636934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:19.874 [2024-07-11 21:37:54.636954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
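(The "nvme0" and "nvme0n1 nvme0n2" expectations checked throughout come from two helpers traced at host/discovery.sh@59 and @55. Sketches inferred from the traced commands, again reconstructed from xtrace rather than the script source:

    get_subsystem_names() {
        # Controller names as seen by the host-side bdev_nvme layer.
        rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers \
            | jq -r '.[].name' | sort | xargs
    }
    get_bdev_list() {
        # Bdev names created for the attached namespaces (nvme0n1, nvme0n2, ...).
        rpc_cmd -s /tmp/host.sock bdev_get_bdevs \
            | jq -r '.[].name' | sort | xargs
    }
)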
00:31:20.132 [2024-07-11 21:37:54.646675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:20.132 [2024-07-11 21:37:54.646879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.132 [2024-07-11 21:37:54.646918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd9640 with addr=10.0.0.2, port=4420 00:31:20.132 [2024-07-11 21:37:54.646943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd9640 is same with the state(5) to be set 00:31:20.132 [2024-07-11 21:37:54.646968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd9640 (9): Bad file descriptor 00:31:20.132 [2024-07-11 21:37:54.646991] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:20.132 [2024-07-11 21:37:54.647007] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:20.132 [2024-07-11 21:37:54.647021] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:20.132 [2024-07-11 21:37:54.647041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.132 [2024-07-11 21:37:54.656776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:20.132 [2024-07-11 21:37:54.656962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.132 [2024-07-11 21:37:54.656991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd9640 with addr=10.0.0.2, port=4420 00:31:20.132 [2024-07-11 21:37:54.657013] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd9640 is same with the state(5) to be set 00:31:20.132 [2024-07-11 21:37:54.657037] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd9640 (9): Bad file descriptor 00:31:20.132 [2024-07-11 21:37:54.657059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:20.132 [2024-07-11 21:37:54.657074] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:20.132 [2024-07-11 21:37:54.657088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:20.132 [2024-07-11 21:37:54.657108] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
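(Every expectation in this test is polled through the waitforcondition helper traced at common/autotest_common.sh@912-918: store the condition string, then retry it up to ten times with a one-second sleep. A sketch reconstructed from those trace lines; the final return 1 on timeout is assumed, since only the success path appears in this trace:

    waitforcondition() {
        local cond=$1                # autotest_common.sh@912
        local max=10                 # @913
        while ((max--)); do          # @914
            if eval "$cond"; then    # @915
                return 0             # @916
            fi
            sleep 1                  # @918
        done
        return 1                     # assumed timeout path, not exercised here
    }

The is_notification_count_eq checks wrap the same helper around get_notification_count, which, per the notify_id values in the trace (0, 1, 2, 4), reads notify_get_notifications -i $notify_id and advances notify_id by the count returned.)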
00:31:20.132 [2024-07-11 21:37:54.666849] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:31:20.132 [2024-07-11 21:37:54.666994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:20.132 [2024-07-11 21:37:54.667022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1cd9640 with addr=10.0.0.2, port=4420 00:31:20.132 [2024-07-11 21:37:54.667053] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cd9640 is same with the state(5) to be set 00:31:20.132 [2024-07-11 21:37:54.667076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cd9640 (9): Bad file descriptor 00:31:20.132 [2024-07-11 21:37:54.667097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:31:20.132 [2024-07-11 21:37:54.667111] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:31:20.132 [2024-07-11 21:37:54.667126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:31:20.132 [2024-07-11 21:37:54.667145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:20.132 [2024-07-11 21:37:54.672334] bdev_nvme.c:6770:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:31:20.132 [2024-07-11 21:37:54.672367] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- 
# [[ 4420 4421 == \4\4\2\1 ]] 00:31:20.132 21:37:54 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:31:21.068 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:21.068 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:31:21.068 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:31:21.068 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:31:21.069 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:31:21.331 21:37:55 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:31:21.331 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:21.332 21:37:55 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.266 [2024-07-11 21:37:56.963549] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:31:22.266 [2024-07-11 21:37:56.963576] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:31:22.266 [2024-07-11 21:37:56.963600] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:31:22.523 [2024-07-11 21:37:57.050906] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:31:22.781 [2024-07-11 21:37:57.360823] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:31:22.781 [2024-07-11 21:37:57.360856] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.781 request: 00:31:22.781 { 00:31:22.781 "name": "nvme", 00:31:22.781 "trtype": 
"tcp", 00:31:22.781 "traddr": "10.0.0.2", 00:31:22.781 "adrfam": "ipv4", 00:31:22.781 "trsvcid": "8009", 00:31:22.781 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:22.781 "wait_for_attach": true, 00:31:22.781 "method": "bdev_nvme_start_discovery", 00:31:22.781 "req_id": 1 00:31:22.781 } 00:31:22.781 Got JSON-RPC error response 00:31:22.781 response: 00:31:22.781 { 00:31:22.781 "code": -17, 00:31:22.781 "message": "File exists" 00:31:22.781 } 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.781 request: 00:31:22.781 { 00:31:22.781 "name": "nvme_second", 00:31:22.781 "trtype": "tcp", 00:31:22.781 "traddr": "10.0.0.2", 00:31:22.781 "adrfam": "ipv4", 00:31:22.781 "trsvcid": "8009", 00:31:22.781 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:22.781 "wait_for_attach": true, 00:31:22.781 "method": "bdev_nvme_start_discovery", 00:31:22.781 "req_id": 1 00:31:22.781 } 00:31:22.781 Got JSON-RPC error response 00:31:22.781 response: 00:31:22.781 { 00:31:22.781 "code": -17, 00:31:22.781 "message": "File exists" 00:31:22.781 } 00:31:22.781 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:31:22.782 21:37:57 
nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:22.782 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:31:23.039 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:23.039 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:31:23.039 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:31:23.039 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:31:23.039 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:23.039 21:37:57 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:23.971 [2024-07-11 21:37:58.560262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:23.971 [2024-07-11 21:37:58.560316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d17b00 with addr=10.0.0.2, port=8010 00:31:23.971 [2024-07-11 21:37:58.560344] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:23.971 [2024-07-11 21:37:58.560361] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:23.971 [2024-07-11 21:37:58.560375] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:24.905 [2024-07-11 21:37:59.562732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:31:24.905 [2024-07-11 21:37:59.562809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d17b00 with addr=10.0.0.2, port=8010 00:31:24.905 [2024-07-11 21:37:59.562845] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:31:24.905 [2024-07-11 21:37:59.562860] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:31:24.905 [2024-07-11 21:37:59.562882] bdev_nvme.c:7045:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:31:25.839 [2024-07-11 21:38:00.564910] bdev_nvme.c:7026:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:31:25.839 request: 00:31:25.839 { 00:31:25.839 "name": "nvme_second", 00:31:25.839 "trtype": "tcp", 00:31:25.839 "traddr": "10.0.0.2", 00:31:25.839 "adrfam": "ipv4", 00:31:25.839 "trsvcid": "8010", 00:31:25.839 "hostnqn": "nqn.2021-12.io.spdk:test", 00:31:25.839 "wait_for_attach": false, 00:31:25.839 "attach_timeout_ms": 3000, 00:31:25.839 "method": "bdev_nvme_start_discovery", 00:31:25.839 "req_id": 1 00:31:25.839 } 00:31:25.839 Got JSON-RPC error response 00:31:25.839 response: 00:31:25.839 { 00:31:25.839 "code": -110, 00:31:25.839 "message": "Connection timed out" 00:31:25.839 } 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 
-- # [[ 1 == 0 ]] 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:31:25.839 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 1026791 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:31:26.097 rmmod nvme_tcp 00:31:26.097 rmmod nvme_fabrics 00:31:26.097 rmmod nvme_keyring 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 1026763 ']' 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 1026763 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 1026763 ']' 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 1026763 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1026763 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:31:26.097 21:38:00 
nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1026763' 00:31:26.097 killing process with pid 1026763 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 1026763 00:31:26.097 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 1026763 00:31:26.356 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:31:26.356 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:31:26.356 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:31:26.356 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:31:26.356 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:31:26.356 21:38:00 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:26.356 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:26.356 21:38:00 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.257 21:38:02 nvmf_tcp.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:31:28.257 00:31:28.257 real 0m14.193s 00:31:28.257 user 0m21.210s 00:31:28.257 sys 0m2.791s 00:31:28.257 21:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:31:28.257 21:38:02 nvmf_tcp.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:31:28.257 ************************************ 00:31:28.257 END TEST nvmf_host_discovery 00:31:28.257 ************************************ 00:31:28.257 21:38:02 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:31:28.257 21:38:02 nvmf_tcp -- nvmf/nvmf.sh@102 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:28.257 21:38:02 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:31:28.257 21:38:02 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:31:28.257 21:38:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:28.257 ************************************ 00:31:28.257 START TEST nvmf_host_multipath_status 00:31:28.257 ************************************ 00:31:28.257 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:31:28.515 * Looking for test storage... 
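To replay this stage outside the Jenkins pool, the sketch below mirrors the run_test invocation recorded above. It is a minimal sketch, not the harness itself: the ./spdk checkout location is an assumption, the script must run as root (later in this log it calls modprobe nvme-tcp and creates network namespaces), and NET_TYPE=phy means it expects two physical test ports that can reach each other.

    # Minimal repro sketch -- only the script path and --transport flag are
    # taken from the log; the checkout path and root privileges are assumptions.
    cd spdk
    sudo ./test/nvmf/host/multipath_status.sh --transport=tcp
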
00:31:28.515 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:28.515 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:31:28.516 21:38:03 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:31:28.516 21:38:03 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:31:30.419 21:38:04 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:31:30.419 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:31:30.419 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
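The device-matching trace above buckets NICs by PCI vendor/device ID: both ports at 0000:0a:00.x report 0x8086/0x159b, so they land in the e810 list, and the rdma-only branches are skipped because the transport is tcp. As a spot-check outside the harness, stock pciutils reproduces the same match; the ID below is copied from the "Found 0000:0a:00.x (0x8086 - 0x159b)" lines in the trace, and the bound driver should be ice, matching the [[ ice == unknown ]] comparisons.

    # pciutils spot-check: list the E810 ports the harness just matched and
    # the kernel driver bound to each (expected: ice).
    lspci -nn -d 8086:159b
    lspci -k -d 8086:159b
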
00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:31:30.419 Found net devices under 0000:0a:00.0: cvl_0_0 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:31:30.419 Found net devices under 0000:0a:00.1: cvl_0_1 00:31:30.419 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:31:30.420 21:38:05 
nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:31:30.420 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:30.420 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:31:30.420 00:31:30.420 --- 10.0.0.2 ping statistics --- 00:31:30.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.420 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:30.420 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:30.420 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:31:30.420 00:31:30.420 --- 10.0.0.1 ping statistics --- 00:31:30.420 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:30.420 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=1030067 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 1030067 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1030067 ']' 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:30.420 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:30.420 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:30.679 [2024-07-11 21:38:05.223434] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:31:30.679 [2024-07-11 21:38:05.223521] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:30.679 EAL: No free 2048 kB hugepages reported on node 1 00:31:30.679 [2024-07-11 21:38:05.291126] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:31:30.679 [2024-07-11 21:38:05.382861] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:30.679 [2024-07-11 21:38:05.382919] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:30.679 [2024-07-11 21:38:05.382942] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:30.679 [2024-07-11 21:38:05.382953] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:30.679 [2024-07-11 21:38:05.382963] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:30.679 [2024-07-11 21:38:05.383029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:31:30.679 [2024-07-11 21:38:05.383035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:31:30.944 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:30.944 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:30.944 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:31:30.944 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:31:30.944 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:30.944 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:30.944 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=1030067 00:31:30.944 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:31.203 [2024-07-11 21:38:05.734008] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:31.203 21:38:05 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:31:31.461 Malloc0 00:31:31.461 21:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:31:31.718 21:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:31.976 21:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:31.976 [2024-07-11 21:38:06.732101] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:32.234 21:38:06 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:32.234 [2024-07-11 21:38:06.980765] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:32.234 21:38:06 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=1030232 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 1030232 /var/tmp/bdevperf.sock 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 1030232 ']' 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:32.234 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:31:32.234 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:31:32.800 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:31:32.800 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:31:32.800 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:31:32.800 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:31:33.372 Nvme0n1 00:31:33.372 21:38:07 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:31:33.698 Nvme0n1 00:31:33.698 21:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:31:33.698 21:38:08 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:31:36.227 21:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:31:36.227 21:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:31:36.227 21:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:36.227 21:38:10 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:31:37.161 21:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:31:37.161 21:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:37.161 21:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.161 21:38:11 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:37.419 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.419 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:37.419 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.419 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:37.677 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:37.677 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:37.677 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.677 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:37.935 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:37.935 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:37.935 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:37.935 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:38.193 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.193 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:38.193 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.193 21:38:12 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r 
'.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:38.462 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.462 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:38.462 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:38.462 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:38.722 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:38.722 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:31:38.722 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:38.979 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:39.237 21:38:13 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:31:40.170 21:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:31:40.170 21:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:40.170 21:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.170 21:38:14 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:40.428 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:40.428 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:40.428 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.428 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:40.686 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.686 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:40.686 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.687 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:40.945 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:40.945 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:40.945 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:40.945 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:41.203 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.203 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:41.203 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.203 21:38:15 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:41.461 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.461 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:41.461 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:41.461 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:41.719 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:41.719 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:31:41.720 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:41.978 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:31:42.235 21:38:16 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:31:43.602 21:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:31:43.602 21:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:43.602 21:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.602 21:38:17 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:43.602 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:43.602 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:43.602 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.602 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:43.859 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:43.859 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:43.859 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:43.859 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:44.116 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.116 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:44.116 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.116 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:44.374 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.374 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:44.374 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.374 21:38:18 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:44.663 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.663 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:44.663 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:44.663 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:44.921 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:44.921 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:31:44.921 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:45.179 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:45.437 21:38:19 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:31:46.371 21:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:31:46.371 21:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:46.371 21:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.371 21:38:20 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:46.629 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:46.629 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:46.629 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.629 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:46.887 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:46.887 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:46.887 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:46.887 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:47.145 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.145 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:47.145 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.145 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:47.403 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:47.403 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:47.403 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.403 21:38:21 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:47.660 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 
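On the target side, set_ANA_state (script lines @59-@60) always issues the same two RPCs, one per listener of nqn.2016-06.io.spdk:cnode1: the first argument becomes the ANA state advertised on port 4420, the second the state on port 4421. Shown in isolation, as inferred from the paired calls in the trace:

    # inferred shape of the helper: arg 1 -> listener 4420, arg 2 -> listener 4421
    set_ANA_state() {
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n "$1"
        scripts/rpc.py nvmf_subsystem_listener_set_ana_state \
            nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n "$2"
    }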
-- # [[ true == \t\r\u\e ]] 00:31:47.660 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:47.660 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:47.660 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:47.918 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:47.918 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:31:47.918 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:48.176 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:31:48.434 21:38:22 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:31:49.363 21:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:31:49.363 21:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:49.363 21:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.363 21:38:23 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:49.620 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:49.620 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:31:49.620 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.620 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:49.877 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:49.877 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:49.877 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:49.877 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:50.135 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.135 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # 
port_status 4421 connected true 00:31:50.135 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.135 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:50.394 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:50.394 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:50.394 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.394 21:38:24 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:50.695 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:50.695 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:31:50.695 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:50.695 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:50.953 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:50.953 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:31:50.953 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:31:51.211 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:51.469 21:38:25 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:31:52.402 21:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:31:52.402 21:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:52.402 21:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.402 21:38:26 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:52.662 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:52.662 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:52.662 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
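The scenarios driven so far pin down how each io_path flag reacts to the advertised ANA state while the initial one-current-path multipath policy is in effect: connected tracks the TCP connection and stays true throughout, accessible goes false exactly when a listener is inaccessible, and current marks the single path the host actually selects for I/O. Summarized from the expectations asserted in the trace:

    # ANA(4420)     ANA(4421)     -> cur20 cur21 con20 con21 acc20 acc21
    # non_optimized non_optimized ->  true false  true  true  true  true   (@102)
    # non_optimized inaccessible  ->  true false  true  true  true false   (@106)
    # inaccessible  inaccessible  -> false false  true  true false false   (@110)
    # inaccessible  optimized     -> false  true  true  true false  true   (@114)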
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.662 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:52.920 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:52.920 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:52.920 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:52.920 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:53.177 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.177 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:53.177 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.177 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:53.434 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.434 21:38:27 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:31:53.434 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.434 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:53.692 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:53.692 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:53.692 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:53.692 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:53.950 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:53.950 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:31:54.206 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:31:54.206 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n 
optimized 00:31:54.463 21:38:28 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:54.721 21:38:29 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:31:55.653 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:31:55.653 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:31:55.653 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.653 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:55.910 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:55.910 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:55.910 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:55.910 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:56.168 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.168 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:56.168 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.168 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:56.427 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.427 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:56.427 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.427 21:38:30 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:56.684 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.685 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:56.685 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.685 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:56.942 21:38:31 
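When debugging a run like this it is often quicker to dump the whole path table once than to replay the per-field checks; the same RPC the test uses supports that with a broader jq filter (an illustrative one-liner, not part of the test):

    # one line per io_path: port plus the three flags the test asserts on
    scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r '.poll_groups[].io_paths[]
               | "\(.transport.trsvcid) current=\(.current) connected=\(.connected) accessible=\(.accessible)"'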
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:56.942 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:56.942 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:56.942 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:31:57.200 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:57.200 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:31:57.200 21:38:31 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:31:57.457 21:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:31:57.716 21:38:32 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:31:58.649 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:31:58.649 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:31:58.649 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.649 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:31:58.904 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:31:58.905 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:31:58.905 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:58.905 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:31:59.161 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.161 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:31:59.161 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.161 21:38:33 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:31:59.418 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.418 21:38:34 
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:31:59.418 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.418 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:31:59.674 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.674 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:31:59.674 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.674 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:31:59.931 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:31:59.931 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:31:59.931 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:31:59.931 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:00.188 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:00.188 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:32:00.188 21:38:34 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:00.445 21:38:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:32:00.704 21:38:35 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:32:01.637 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:32:01.637 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:01.637 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.637 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:01.895 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:01.895 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:32:01.895 21:38:36 
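The bdev_nvme_set_multipath_policy call at @116 switches bdev Nvme0n1 to active_active, and the expectations after it reflect that: with optimized/optimized (@121) and with non_optimized/non_optimized (@131) both ports are current at once, while non_optimized next to optimized (@125) leaves only the optimized path current. In other words, every path in the best available ANA group now carries I/O, instead of a single selected path. The policy switch from the trace, shown in isolation:

    # same RPC as @116: spread I/O across all paths in the best ANA group
    scripts/rpc.py -s /var/tmp/bdevperf.sock \
        bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active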
nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:01.895 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:02.154 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.154 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:02.154 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.154 21:38:36 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:02.412 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.412 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:02.412 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.412 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:02.670 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.670 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:02.670 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.670 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:02.929 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:02.929 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:32:02.929 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:02.929 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:03.187 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:03.187 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:32:03.187 21:38:37 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:32:03.445 21:38:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:32:03.704 21:38:38 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:32:04.638 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:32:04.638 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:32:04.638 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.638 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:32:04.897 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:04.897 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:32:04.897 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:04.897 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:32:05.155 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:05.155 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:32:05.155 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.155 21:38:39 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:32:05.412 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.412 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:32:05.413 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.413 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:32:05.672 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.672 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:32:05.672 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.672 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:32:05.930 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:32:05.930 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:32:05.930 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:32:05.930 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 1030232 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 1030232 ']' 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 1030232 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1030232 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1030232' 00:32:06.208 killing process with pid 1030232 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1030232 00:32:06.208 21:38:40 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1030232 00:32:06.208 Connection closed with partial response: 00:32:06.208 00:32:06.208 00:32:06.475 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 1030232 00:32:06.475 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:06.475 [2024-07-11 21:38:07.041143] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:06.475 [2024-07-11 21:38:07.041240] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1030232 ] 00:32:06.475 EAL: No free 2048 kB hugepages reported on node 1 00:32:06.475 [2024-07-11 21:38:07.101553] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:06.475 [2024-07-11 21:38:07.189457] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:32:06.475 Running I/O for 90 seconds... 
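Everything from here to the end of the excerpt is the raw bdevperf I/O trace replayed out of try.txt. The records come in pairs: a WRITE submission printed by nvme_qpair.c:243 and its completion printed by nvme_qpair.c:474. Each completion carries ASYMMETRIC ACCESS INACCESSIBLE (03/02), i.e. status code type 0x3 (path related) with status code 0x02, which a controller returns for I/O sent to a namespace whose ANA state is inaccessible; these are the failures provoked by the ANA transitions the test drove around 21:38:22, and they are what forces the host to fail over to the other path. When skimming a dump like this, counting the status strings is faster than reading the pairs (assumes the dump was saved as try.txt):

    # how many completions carried the path-related ANA error
    grep -c 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' try.txt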
00:32:06.475 [2024-07-11 21:38:22.698203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:49968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.698259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.698889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:49976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.698917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.698947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:49984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.698965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.698989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:49992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:50000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:50008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:50016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:50024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:50032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:50040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:50048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:50056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:50064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:50072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:50080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:50088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:50096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:50104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:50112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:50120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699639] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:50128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:50136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:50144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:50152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:50160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:50168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:50176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.475 [2024-07-11 21:38:22.699926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:32:06.475 [2024-07-11 21:38:22.699948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:50184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.699964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.699986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:50192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:50200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 
[2024-07-11 21:38:22.700041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:50208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:50216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:50224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:50232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:50240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:50248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:50256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:50264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:50272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:50280 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:50288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:50296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:50304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:50312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:50320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:50328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:50336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:50344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:50352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700879] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:50360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:50368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:50376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.700971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.700993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:50384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.701010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.701032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:50392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.701048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.701070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:50400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.701086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.701109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:50408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.701125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.701147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:50416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.701164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.701186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:50424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.701202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.701225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:50432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:06.476 [2024-07-11 21:38:22.701257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:32:06.476 [2024-07-11 21:38:22.701280] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:50440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:32:06.476 [2024-07-11 21:38:22.701300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0
[... the remaining command/completion NOTICE pairs condensed: WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK) commands on qid:1, covering lba 49832-50848 at 21:38:22 and lba 87808-88736 at 21:38:38, every one completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...]
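Note: this flood of identical completions is the multipath test behaving as designed, not a transport failure. multipath_status.sh flips the ANA (Asymmetric Namespace Access) state of the path under test to "inaccessible", so the target fails every queued WRITE/READ on qid:1 with status 03/02 until the host retries on the other path. A minimal sketch of that toggle via SPDK's rpc.py (the listener address and port are assumed from this run's 10.0.0.2:4420 defaults):

  # Mark the listener inaccessible; '-n optimized' puts it back.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
      nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 \
      -t tcp -a 10.0.0.2 -s 4420 -n inaccessible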
00:32:06.480 Received shutdown signal, test time was about 32.238923 seconds
00:32:06.480
00:32:06.480 Latency(us)
00:32:06.480 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:06.480 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:32:06.480 Verification LBA range: start 0x0 length 0x4000
00:32:06.480 Nvme0n1 : 32.24 8136.17 31.78 0.00 0.00 15706.15 276.10 4026531.84
00:32:06.480 ===================================================================================================================
00:32:06.480 Total : 8136.17 31.78 0.00 0.00 15706.15 276.10 4026531.84
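Note on reading the summary line: the verify job held 8136.17 IOPS at the 4096-byte IO size, which agrees with the bandwidth column (8136.17 * 4096 / 1048576 = 31.78 MiB/s), and the latency spread (min 276.10 us, average 15706.15 us, max 4026531.84 us, i.e. about 4 seconds) is consistent with the ANA failover windows above landing in the tail.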
00:32:06.480 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:32:06.756 rmmod nvme_tcp
00:32:06.756 rmmod nvme_fabrics
00:32:06.756 rmmod nvme_keyring
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 1030067
[... guard checks, the modprobe retry loop, and killprocess bookkeeping condensed: kill -0 confirms pid 1030067 is alive and ps confirms process_name=reactor_0 before signalling ...]
00:32:06.756 killing process with pid 1030067
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 1030067
00:32:06.756 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 1030067
00:32:07.015 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:32:07.015 21:38:41 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns
00:32:08.925 21:38:43 nvmf_tcp.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:32:08.925
00:32:08.925 real 0m40.648s
00:32:08.925 user 1m59.705s
00:32:08.925 sys 0m11.478s
00:32:08.925 ************************************
00:32:08.925 END TEST nvmf_host_multipath_status
00:32:08.925 ************************************
00:32:09.184 21:38:43 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:32:09.184 21:38:43 nvmf_tcp -- nvmf/nvmf.sh@103 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:09.184 ************************************
00:32:09.184 START TEST nvmf_discovery_remove_ifc
00:32:09.184 ************************************
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp
00:32:09.184 * Looking for test storage...
00:32:09.184 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
[... nvmf/common.sh environment trace condensed: NVMF_PORT=4420, NVMF_SECOND_PORT=4421, NVMF_THIRD_PORT=4422, NVMF_IP_PREFIX=192.168.100, NVMF_TCP_IP_ADDRESS=127.0.0.1, NVMF_SERIAL=SPDKISFASTANDAWESOME, NVME_CONNECT='nvme connect', NET_TYPE=phy, NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn ...]
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
[... paths/export.sh trace condensed: the toolchain directories (/opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, /opt/go/1.21.1/bin) are prepended to PATH several times over and the result re-exported ...]
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']'
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock
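Note: discovery_port=8009 and discovery_nqn=nqn.2014-08.org.nvmexpress.discovery are the standard NVMe-oF discovery service coordinates; this test's whole premise is removing the interface underneath a live discovery connection. For reference, once the target set up below is listening, that service could be queried from the initiator with stock nvme-cli (a sketch, reusing host_nqn from above and the 10.0.0.2 target address assigned further down):

  # Ask the discovery service at 10.0.0.2:8009 what subsystems it exposes.
  nvme discover -t tcp -a 10.0.0.2 -s 8009 -q nqn.2021-12.io.spdk:test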
00:32:09.184 21:38:43 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit
[... nvmftestinit trace condensed: prepare_net_devs starts from is_hw=no, clears any stale spdk netns, then gather_supported_nvmf_pci_devs builds the Intel e810/x722 and Mellanox device-ID tables, walks the PCI bus, and checks each function's driver (ice, 0x159b) ...]
00:32:11.123 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:32:11.123 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:32:11.123 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:32:11.123 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:32:11.123 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:32:11.123 Found net devices under 0000:0a:00.0: cvl_0_0
00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:32:11.124 Found net devices under 0000:0a:00.1: cvl_0_1
00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes
00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
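Note: the "Found net devices under ..." lines come from nvmf/common.sh globbing sysfs, not from parsing lspci: the kernel publishes each port's netdev name under its PCI address. The same lookup done by hand, using the two e810 functions found above:

  # Map a PCI function to its kernel netdev; prints cvl_0_0 and cvl_0_1 on this host.
  for pci in 0000:0a:00.0 0000:0a:00.1; do
      ls "/sys/bus/pci/devices/$pci/net/"
  done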
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:11.124 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:11.124 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:32:11.124 00:32:11.124 --- 10.0.0.2 ping statistics --- 00:32:11.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.124 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:11.124 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:11.124 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:32:11.124 00:32:11.124 --- 10.0.0.1 ping statistics --- 00:32:11.124 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:11.124 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:11.124 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=1036420 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 1036420 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1036420 ']' 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:11.382 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:11.382 21:38:45 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.382 [2024-07-11 21:38:45.940240] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
00:32:11.382 [2024-07-11 21:38:45.940311] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:11.382 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.382 [2024-07-11 21:38:46.007902] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.382 [2024-07-11 21:38:46.097178] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:11.382 [2024-07-11 21:38:46.097244] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:11.382 [2024-07-11 21:38:46.097271] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:11.382 [2024-07-11 21:38:46.097284] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:11.382 [2024-07-11 21:38:46.097296] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:11.382 [2024-07-11 21:38:46.097324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.640 [2024-07-11 21:38:46.250346] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:11.640 [2024-07-11 21:38:46.258542] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:32:11.640 null0 00:32:11.640 [2024-07-11 21:38:46.290480] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=1036441 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 1036441 /tmp/host.sock 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 1036441 ']' 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local 
max_retries=100 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:32:11.640 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:11.640 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.640 [2024-07-11 21:38:46.355403] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:32:11.640 [2024-07-11 21:38:46.355469] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1036441 ] 00:32:11.640 EAL: No free 2048 kB hugepages reported on node 1 00:32:11.899 [2024-07-11 21:38:46.417105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:11.899 [2024-07-11 21:38:46.507277] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:11.899 21:38:46 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.275 [2024-07-11 21:38:47.728912] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:13.275 [2024-07-11 21:38:47.728938] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:13.275 [2024-07-11 21:38:47.728961] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:13.275 [2024-07-11 21:38:47.815277] 
bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:32:13.275 [2024-07-11 21:38:48.041573] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:13.275 [2024-07-11 21:38:48.041642] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:13.275 [2024-07-11 21:38:48.041684] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:13.275 [2024-07-11 21:38:48.041709] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:32:13.275 [2024-07-11 21:38:48.041735] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.275 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.532 [2024-07-11 21:38:48.046750] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a48300 was disconnected and freed. delete nvme_qpair. 
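The discovery bootstrap above runs entirely over the host's RPC socket: 10.0.0.2 sits on cvl_0_0 inside the cvl_0_0_ns_spdk namespace (target side) and 10.0.0.1 on cvl_0_1 in the root namespace (initiator side). A sketch of the same bootstrap issued directly with scripts/rpc.py, assuming rpc_cmd in the harness forwards to it; every flag below is verbatim from the trace:

  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_set_options -e 1
  ./scripts/rpc.py -s /tmp/host.sock framework_start_init
  ./scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp \
      -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test \
      --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
      --fast-io-fail-timeout-sec 1 --wait-for-attach

With --wait-for-attach the last call blocks until the discovered subsystem's namespaces are exposed as bdevs, which is why the attach of nvme0 completes inside the RPC above.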
00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:13.532 21:38:48 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:14.463 21:38:49 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sort 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:15.833 21:38:50 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:16.762 21:38:51 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:17.691 21:38:52 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
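The get_bdev_list/sleep cycle repeating above and below is the harness polling the bdev list toward an expected state. A minimal sketch of the two helpers as reconstructed from the trace; the jq/sort/xargs pipeline and the 1-second poll are verbatim, the function bodies are an assumption:

  get_bdev_list() {
      # List all bdev names over the host RPC socket, one flat sorted line.
      ./scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
  }

  wait_for_bdev() {
      # Poll until the bdev list equals the expected value ('' means wait until empty).
      while [[ "$(get_bdev_list)" != "$1" ]]; do
          sleep 1
      done
  }

Here wait_for_bdev '' keeps looping while nvme0n1 is still listed, i.e. until the controller is finally declared lost and its bdev is deleted.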
00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:18.621 21:38:53 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:18.879 [2024-07-11 21:38:53.482581] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:32:18.879 [2024-07-11 21:38:53.482648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.879 [2024-07-11 21:38:53.482673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.879 [2024-07-11 21:38:53.482693] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.879 [2024-07-11 21:38:53.482708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.879 [2024-07-11 21:38:53.482723] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.879 [2024-07-11 21:38:53.482738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.879 [2024-07-11 21:38:53.482759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.879 [2024-07-11 21:38:53.482777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.879 [2024-07-11 21:38:53.482800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:18.879 [2024-07-11 21:38:53.482831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:18.879 [2024-07-11 21:38:53.482844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ece0 is same with the state(5) to be set 00:32:18.879 [2024-07-11 21:38:53.492599] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0ece0 (9): Bad file descriptor 00:32:18.879 [2024-07-11 21:38:53.502650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:19.809 [2024-07-11 21:38:54.560780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:32:19.809 [2024-07-11 
21:38:54.560829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a0ece0 with addr=10.0.0.2, port=4420 00:32:19.809 [2024-07-11 21:38:54.560852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a0ece0 is same with the state(5) to be set 00:32:19.809 [2024-07-11 21:38:54.560884] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0ece0 (9): Bad file descriptor 00:32:19.809 [2024-07-11 21:38:54.561286] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:32:19.809 [2024-07-11 21:38:54.561321] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:19.809 [2024-07-11 21:38:54.561338] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:19.809 [2024-07-11 21:38:54.561359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:19.809 [2024-07-11 21:38:54.561385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:32:19.809 [2024-07-11 21:38:54.561404] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:32:19.809 21:38:54 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:21.179 [2024-07-11 21:38:55.563896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:32:21.179 [2024-07-11 21:38:55.563929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:32:21.179 [2024-07-11 21:38:55.563944] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:32:21.179 [2024-07-11 21:38:55.563957] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:32:21.179 [2024-07-11 21:38:55.563979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
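The errno 110 (ETIMEDOUT) connect() failure and the failed controller resets above are the intended result of tearing down the target-side interface earlier in the test, verbatim from the trace:

  ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

With --reconnect-delay-sec 1 the initiator retries roughly once per second until the --ctrlr-loss-timeout-sec 2 window elapses, after which the controller is failed permanently and its bdev removed.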
00:32:21.179 [2024-07-11 21:38:55.564016] bdev_nvme.c:6734:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:32:21.179 [2024-07-11 21:38:55.564067] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.179 [2024-07-11 21:38:55.564109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.179 [2024-07-11 21:38:55.564133] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.179 [2024-07-11 21:38:55.564148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.179 [2024-07-11 21:38:55.564166] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.179 [2024-07-11 21:38:55.564180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.179 [2024-07-11 21:38:55.564197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.179 [2024-07-11 21:38:55.564211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.179 [2024-07-11 21:38:55.564227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:32:21.179 [2024-07-11 21:38:55.564243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:21.179 [2024-07-11 21:38:55.564258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:32:21.179 [2024-07-11 21:38:55.564504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a0e160 (9): Bad file descriptor 00:32:21.179 [2024-07-11 21:38:55.565527] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:32:21.179 [2024-07-11 21:38:55.565553] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:21.179 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:21.180 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:21.180 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:21.180 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:21.180 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:21.180 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:21.180 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:21.180 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:21.180 21:38:55 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:22.111 21:38:56 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.043 [2024-07-11 21:38:57.624529] bdev_nvme.c:6983:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:32:23.043 [2024-07-11 21:38:57.624559] bdev_nvme.c:7063:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:32:23.043 [2024-07-11 21:38:57.624585] bdev_nvme.c:6946:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:32:23.043 [2024-07-11 21:38:57.753020] bdev_nvme.c:6912:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:32:23.043 21:38:57 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:32:23.301 [2024-07-11 21:38:57.937222] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:32:23.301 [2024-07-11 21:38:57.937282] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:32:23.301 [2024-07-11 21:38:57.937313] bdev_nvme.c:7773:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:32:23.301 [2024-07-11 21:38:57.937335] bdev_nvme.c:6802:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:32:23.301 [2024-07-11 21:38:57.937348] bdev_nvme.c:6761:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:32:23.301 [2024-07-11 21:38:57.943043] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19fd920 was disconnected and freed. delete nvme_qpair. 
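The second attach above (nvme1, qpair 0x19fd920) follows directly from restoring the target-side interface, verbatim from the trace:

  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up

The still-running discovery service reconnects to 10.0.0.2:8009, re-reads the discovery log page, and attaches the subsystem as a fresh controller, so the namespace reappears as nvme1n1 rather than nvme0n1, which is exactly what wait_for_bdev nvme1n1 polls for.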
00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:24.235 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 1036441 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1036441 ']' 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1036441 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1036441 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1036441' 00:32:24.236 killing process with pid 1036441 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1036441 00:32:24.236 21:38:58 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1036441 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:24.494 rmmod nvme_tcp 00:32:24.494 rmmod nvme_fabrics 00:32:24.494 rmmod nvme_keyring 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 
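nvmfcleanup's module teardown, condensed from the trace above (all three commands are verbatim; the rmmod messages confirm nvme_tcp, nvme_fabrics, and nvme_keyring were unloaded):

  sync
  modprobe -v -r nvme-tcp
  modprobe -v -r nvme-fabrics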
00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 1036420 ']' 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 1036420 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 1036420 ']' 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 1036420 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1036420 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1036420' 00:32:24.494 killing process with pid 1036420 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 1036420 00:32:24.494 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 1036420 00:32:24.776 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:24.776 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:24.776 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:24.776 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:24.776 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:24.776 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:24.776 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:24.776 21:38:59 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.311 21:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:27.311 00:32:27.311 real 0m17.739s 00:32:27.311 user 0m25.726s 00:32:27.311 sys 0m3.036s 00:32:27.311 21:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:27.311 21:39:01 nvmf_tcp.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:32:27.311 ************************************ 00:32:27.311 END TEST nvmf_discovery_remove_ifc 00:32:27.311 ************************************ 00:32:27.311 21:39:01 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:27.311 21:39:01 nvmf_tcp -- nvmf/nvmf.sh@104 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:27.311 21:39:01 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:27.311 21:39:01 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:27.311 21:39:01 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:27.311 ************************************ 00:32:27.311 START TEST nvmf_identify_kernel_target 00:32:27.311 ************************************ 
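The banner above opens the next suite. The same script can be invoked standalone, exactly as run_test does in the trace (a sketch; it assumes the workspace layout of this runner):

  cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp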
00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:32:27.311 * Looking for test storage... 00:32:27.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:32:27.311 21:39:01 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:27.311 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:27.312 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:27.312 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:27.312 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:27.312 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:32:27.312 21:39:01 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:29.213 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:29.213 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:29.213 Found net devices under 0000:0a:00.0: cvl_0_0 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:29.213 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:29.214 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:29.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:29.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:32:29.214 00:32:29.214 --- 10.0.0.2 ping statistics --- 00:32:29.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.214 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:29.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:29.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:32:29.214 00:32:29.214 --- 10.0.0.1 ping statistics --- 00:32:29.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:29.214 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:29.214 21:39:03 
nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:29.214 21:39:03 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:30.149 Waiting for block devices as requested 00:32:30.149 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:30.149 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:30.407 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:30.407 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:30.407 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:30.666 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:30.666 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:30.666 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:30.666 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:30.924 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:30.924 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:30.924 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:31.183 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:31.183 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:31.183 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:31.183 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:31.442 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:31.442 No valid GPT data, bailing 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir 
/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:32:31.442 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:31.702 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:31.702 00:32:31.702 Discovery Log Number of Records 2, Generation counter 2 00:32:31.702 =====Discovery Log Entry 0====== 00:32:31.702 trtype: tcp 00:32:31.702 adrfam: ipv4 00:32:31.702 subtype: current discovery subsystem 00:32:31.702 treq: not specified, sq flow control disable supported 00:32:31.702 portid: 1 00:32:31.702 trsvcid: 4420 00:32:31.702 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:31.702 traddr: 10.0.0.1 00:32:31.702 eflags: none 00:32:31.702 sectype: none 00:32:31.702 =====Discovery Log Entry 1====== 00:32:31.702 trtype: tcp 00:32:31.702 adrfam: ipv4 00:32:31.702 subtype: nvme subsystem 00:32:31.702 treq: not specified, sq flow control disable supported 00:32:31.702 portid: 1 00:32:31.702 trsvcid: 4420 00:32:31.702 subnqn: nqn.2016-06.io.spdk:testnqn 00:32:31.702 traddr: 10.0.0.1 00:32:31.702 eflags: none 00:32:31.702 sectype: none 00:32:31.702 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:32:31.702 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:32:31.702 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.702 ===================================================== 00:32:31.702 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:32:31.702 ===================================================== 00:32:31.702 Controller Capabilities/Features 00:32:31.702 ================================ 00:32:31.702 Vendor ID: 0000 00:32:31.702 Subsystem Vendor ID: 0000 00:32:31.702 Serial Number: 02b32148a02a5bfed090 00:32:31.702 Model Number: Linux 00:32:31.702 Firmware Version: 6.7.0-68 00:32:31.702 Recommended Arb Burst: 0 00:32:31.702 IEEE OUI Identifier: 00 00 00 00:32:31.702 Multi-path I/O 00:32:31.702 May have multiple subsystem ports: No 00:32:31.702 May have multiple 
controllers: No 00:32:31.702 Associated with SR-IOV VF: No 00:32:31.702 Max Data Transfer Size: Unlimited 00:32:31.702 Max Number of Namespaces: 0 00:32:31.702 Max Number of I/O Queues: 1024 00:32:31.702 NVMe Specification Version (VS): 1.3 00:32:31.702 NVMe Specification Version (Identify): 1.3 00:32:31.702 Maximum Queue Entries: 1024 00:32:31.702 Contiguous Queues Required: No 00:32:31.702 Arbitration Mechanisms Supported 00:32:31.702 Weighted Round Robin: Not Supported 00:32:31.702 Vendor Specific: Not Supported 00:32:31.702 Reset Timeout: 7500 ms 00:32:31.702 Doorbell Stride: 4 bytes 00:32:31.702 NVM Subsystem Reset: Not Supported 00:32:31.702 Command Sets Supported 00:32:31.702 NVM Command Set: Supported 00:32:31.702 Boot Partition: Not Supported 00:32:31.702 Memory Page Size Minimum: 4096 bytes 00:32:31.702 Memory Page Size Maximum: 4096 bytes 00:32:31.702 Persistent Memory Region: Not Supported 00:32:31.702 Optional Asynchronous Events Supported 00:32:31.702 Namespace Attribute Notices: Not Supported 00:32:31.702 Firmware Activation Notices: Not Supported 00:32:31.702 ANA Change Notices: Not Supported 00:32:31.702 PLE Aggregate Log Change Notices: Not Supported 00:32:31.702 LBA Status Info Alert Notices: Not Supported 00:32:31.702 EGE Aggregate Log Change Notices: Not Supported 00:32:31.702 Normal NVM Subsystem Shutdown event: Not Supported 00:32:31.702 Zone Descriptor Change Notices: Not Supported 00:32:31.702 Discovery Log Change Notices: Supported 00:32:31.702 Controller Attributes 00:32:31.702 128-bit Host Identifier: Not Supported 00:32:31.702 Non-Operational Permissive Mode: Not Supported 00:32:31.702 NVM Sets: Not Supported 00:32:31.702 Read Recovery Levels: Not Supported 00:32:31.702 Endurance Groups: Not Supported 00:32:31.702 Predictable Latency Mode: Not Supported 00:32:31.702 Traffic Based Keep ALive: Not Supported 00:32:31.702 Namespace Granularity: Not Supported 00:32:31.702 SQ Associations: Not Supported 00:32:31.702 UUID List: Not Supported 00:32:31.702 Multi-Domain Subsystem: Not Supported 00:32:31.702 Fixed Capacity Management: Not Supported 00:32:31.702 Variable Capacity Management: Not Supported 00:32:31.702 Delete Endurance Group: Not Supported 00:32:31.702 Delete NVM Set: Not Supported 00:32:31.702 Extended LBA Formats Supported: Not Supported 00:32:31.702 Flexible Data Placement Supported: Not Supported 00:32:31.702 00:32:31.702 Controller Memory Buffer Support 00:32:31.702 ================================ 00:32:31.702 Supported: No 00:32:31.702 00:32:31.702 Persistent Memory Region Support 00:32:31.702 ================================ 00:32:31.702 Supported: No 00:32:31.702 00:32:31.703 Admin Command Set Attributes 00:32:31.703 ============================ 00:32:31.703 Security Send/Receive: Not Supported 00:32:31.703 Format NVM: Not Supported 00:32:31.703 Firmware Activate/Download: Not Supported 00:32:31.703 Namespace Management: Not Supported 00:32:31.703 Device Self-Test: Not Supported 00:32:31.703 Directives: Not Supported 00:32:31.703 NVMe-MI: Not Supported 00:32:31.703 Virtualization Management: Not Supported 00:32:31.703 Doorbell Buffer Config: Not Supported 00:32:31.703 Get LBA Status Capability: Not Supported 00:32:31.703 Command & Feature Lockdown Capability: Not Supported 00:32:31.703 Abort Command Limit: 1 00:32:31.703 Async Event Request Limit: 1 00:32:31.703 Number of Firmware Slots: N/A 00:32:31.703 Firmware Slot 1 Read-Only: N/A 00:32:31.703 Firmware Activation Without Reset: N/A 00:32:31.703 Multiple Update Detection Support: N/A 
00:32:31.703 Firmware Update Granularity: No Information Provided 00:32:31.703 Per-Namespace SMART Log: No 00:32:31.703 Asymmetric Namespace Access Log Page: Not Supported 00:32:31.703 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:32:31.703 Command Effects Log Page: Not Supported 00:32:31.703 Get Log Page Extended Data: Supported 00:32:31.703 Telemetry Log Pages: Not Supported 00:32:31.703 Persistent Event Log Pages: Not Supported 00:32:31.703 Supported Log Pages Log Page: May Support 00:32:31.703 Commands Supported & Effects Log Page: Not Supported 00:32:31.703 Feature Identifiers & Effects Log Page:May Support 00:32:31.703 NVMe-MI Commands & Effects Log Page: May Support 00:32:31.703 Data Area 4 for Telemetry Log: Not Supported 00:32:31.703 Error Log Page Entries Supported: 1 00:32:31.703 Keep Alive: Not Supported 00:32:31.703 00:32:31.703 NVM Command Set Attributes 00:32:31.703 ========================== 00:32:31.703 Submission Queue Entry Size 00:32:31.703 Max: 1 00:32:31.703 Min: 1 00:32:31.703 Completion Queue Entry Size 00:32:31.703 Max: 1 00:32:31.703 Min: 1 00:32:31.703 Number of Namespaces: 0 00:32:31.703 Compare Command: Not Supported 00:32:31.703 Write Uncorrectable Command: Not Supported 00:32:31.703 Dataset Management Command: Not Supported 00:32:31.703 Write Zeroes Command: Not Supported 00:32:31.703 Set Features Save Field: Not Supported 00:32:31.703 Reservations: Not Supported 00:32:31.703 Timestamp: Not Supported 00:32:31.703 Copy: Not Supported 00:32:31.703 Volatile Write Cache: Not Present 00:32:31.703 Atomic Write Unit (Normal): 1 00:32:31.703 Atomic Write Unit (PFail): 1 00:32:31.703 Atomic Compare & Write Unit: 1 00:32:31.703 Fused Compare & Write: Not Supported 00:32:31.703 Scatter-Gather List 00:32:31.703 SGL Command Set: Supported 00:32:31.703 SGL Keyed: Not Supported 00:32:31.703 SGL Bit Bucket Descriptor: Not Supported 00:32:31.703 SGL Metadata Pointer: Not Supported 00:32:31.703 Oversized SGL: Not Supported 00:32:31.703 SGL Metadata Address: Not Supported 00:32:31.703 SGL Offset: Supported 00:32:31.703 Transport SGL Data Block: Not Supported 00:32:31.703 Replay Protected Memory Block: Not Supported 00:32:31.703 00:32:31.703 Firmware Slot Information 00:32:31.703 ========================= 00:32:31.703 Active slot: 0 00:32:31.703 00:32:31.703 00:32:31.703 Error Log 00:32:31.703 ========= 00:32:31.703 00:32:31.703 Active Namespaces 00:32:31.703 ================= 00:32:31.703 Discovery Log Page 00:32:31.703 ================== 00:32:31.703 Generation Counter: 2 00:32:31.703 Number of Records: 2 00:32:31.703 Record Format: 0 00:32:31.703 00:32:31.703 Discovery Log Entry 0 00:32:31.703 ---------------------- 00:32:31.703 Transport Type: 3 (TCP) 00:32:31.703 Address Family: 1 (IPv4) 00:32:31.703 Subsystem Type: 3 (Current Discovery Subsystem) 00:32:31.703 Entry Flags: 00:32:31.703 Duplicate Returned Information: 0 00:32:31.703 Explicit Persistent Connection Support for Discovery: 0 00:32:31.703 Transport Requirements: 00:32:31.703 Secure Channel: Not Specified 00:32:31.703 Port ID: 1 (0x0001) 00:32:31.703 Controller ID: 65535 (0xffff) 00:32:31.703 Admin Max SQ Size: 32 00:32:31.703 Transport Service Identifier: 4420 00:32:31.703 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:32:31.703 Transport Address: 10.0.0.1 00:32:31.703 Discovery Log Entry 1 00:32:31.703 ---------------------- 00:32:31.703 Transport Type: 3 (TCP) 00:32:31.703 Address Family: 1 (IPv4) 00:32:31.703 Subsystem Type: 2 (NVM Subsystem) 00:32:31.703 Entry Flags: 
00:32:31.703 Duplicate Returned Information: 0 00:32:31.703 Explicit Persistent Connection Support for Discovery: 0 00:32:31.703 Transport Requirements: 00:32:31.703 Secure Channel: Not Specified 00:32:31.703 Port ID: 1 (0x0001) 00:32:31.703 Controller ID: 65535 (0xffff) 00:32:31.703 Admin Max SQ Size: 32 00:32:31.703 Transport Service Identifier: 4420 00:32:31.703 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:32:31.703 Transport Address: 10.0.0.1 00:32:31.703 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:32:31.703 EAL: No free 2048 kB hugepages reported on node 1 00:32:31.703 get_feature(0x01) failed 00:32:31.703 get_feature(0x02) failed 00:32:31.703 get_feature(0x04) failed 00:32:31.703 ===================================================== 00:32:31.703 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:32:31.703 ===================================================== 00:32:31.703 Controller Capabilities/Features 00:32:31.703 ================================ 00:32:31.703 Vendor ID: 0000 00:32:31.703 Subsystem Vendor ID: 0000 00:32:31.703 Serial Number: 8048f83bd1a6cb6166a5 00:32:31.703 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:32:31.703 Firmware Version: 6.7.0-68 00:32:31.703 Recommended Arb Burst: 6 00:32:31.703 IEEE OUI Identifier: 00 00 00 00:32:31.703 Multi-path I/O 00:32:31.703 May have multiple subsystem ports: Yes 00:32:31.703 May have multiple controllers: Yes 00:32:31.703 Associated with SR-IOV VF: No 00:32:31.703 Max Data Transfer Size: Unlimited 00:32:31.703 Max Number of Namespaces: 1024 00:32:31.703 Max Number of I/O Queues: 128 00:32:31.703 NVMe Specification Version (VS): 1.3 00:32:31.703 NVMe Specification Version (Identify): 1.3 00:32:31.703 Maximum Queue Entries: 1024 00:32:31.703 Contiguous Queues Required: No 00:32:31.703 Arbitration Mechanisms Supported 00:32:31.703 Weighted Round Robin: Not Supported 00:32:31.703 Vendor Specific: Not Supported 00:32:31.703 Reset Timeout: 7500 ms 00:32:31.703 Doorbell Stride: 4 bytes 00:32:31.703 NVM Subsystem Reset: Not Supported 00:32:31.703 Command Sets Supported 00:32:31.703 NVM Command Set: Supported 00:32:31.703 Boot Partition: Not Supported 00:32:31.703 Memory Page Size Minimum: 4096 bytes 00:32:31.703 Memory Page Size Maximum: 4096 bytes 00:32:31.703 Persistent Memory Region: Not Supported 00:32:31.703 Optional Asynchronous Events Supported 00:32:31.703 Namespace Attribute Notices: Supported 00:32:31.703 Firmware Activation Notices: Not Supported 00:32:31.703 ANA Change Notices: Supported 00:32:31.703 PLE Aggregate Log Change Notices: Not Supported 00:32:31.703 LBA Status Info Alert Notices: Not Supported 00:32:31.703 EGE Aggregate Log Change Notices: Not Supported 00:32:31.703 Normal NVM Subsystem Shutdown event: Not Supported 00:32:31.703 Zone Descriptor Change Notices: Not Supported 00:32:31.703 Discovery Log Change Notices: Not Supported 00:32:31.703 Controller Attributes 00:32:31.703 128-bit Host Identifier: Supported 00:32:31.703 Non-Operational Permissive Mode: Not Supported 00:32:31.703 NVM Sets: Not Supported 00:32:31.703 Read Recovery Levels: Not Supported 00:32:31.703 Endurance Groups: Not Supported 00:32:31.703 Predictable Latency Mode: Not Supported 00:32:31.703 Traffic Based Keep ALive: Supported 00:32:31.703 Namespace Granularity: Not Supported 
00:32:31.703 SQ Associations: Not Supported 00:32:31.703 UUID List: Not Supported 00:32:31.703 Multi-Domain Subsystem: Not Supported 00:32:31.703 Fixed Capacity Management: Not Supported 00:32:31.703 Variable Capacity Management: Not Supported 00:32:31.703 Delete Endurance Group: Not Supported 00:32:31.703 Delete NVM Set: Not Supported 00:32:31.703 Extended LBA Formats Supported: Not Supported 00:32:31.703 Flexible Data Placement Supported: Not Supported 00:32:31.703 00:32:31.703 Controller Memory Buffer Support 00:32:31.703 ================================ 00:32:31.703 Supported: No 00:32:31.703 00:32:31.703 Persistent Memory Region Support 00:32:31.703 ================================ 00:32:31.703 Supported: No 00:32:31.703 00:32:31.703 Admin Command Set Attributes 00:32:31.703 ============================ 00:32:31.703 Security Send/Receive: Not Supported 00:32:31.703 Format NVM: Not Supported 00:32:31.703 Firmware Activate/Download: Not Supported 00:32:31.703 Namespace Management: Not Supported 00:32:31.703 Device Self-Test: Not Supported 00:32:31.703 Directives: Not Supported 00:32:31.703 NVMe-MI: Not Supported 00:32:31.703 Virtualization Management: Not Supported 00:32:31.703 Doorbell Buffer Config: Not Supported 00:32:31.704 Get LBA Status Capability: Not Supported 00:32:31.704 Command & Feature Lockdown Capability: Not Supported 00:32:31.704 Abort Command Limit: 4 00:32:31.704 Async Event Request Limit: 4 00:32:31.704 Number of Firmware Slots: N/A 00:32:31.704 Firmware Slot 1 Read-Only: N/A 00:32:31.704 Firmware Activation Without Reset: N/A 00:32:31.704 Multiple Update Detection Support: N/A 00:32:31.704 Firmware Update Granularity: No Information Provided 00:32:31.704 Per-Namespace SMART Log: Yes 00:32:31.704 Asymmetric Namespace Access Log Page: Supported 00:32:31.704 ANA Transition Time : 10 sec 00:32:31.704 00:32:31.704 Asymmetric Namespace Access Capabilities 00:32:31.704 ANA Optimized State : Supported 00:32:31.704 ANA Non-Optimized State : Supported 00:32:31.704 ANA Inaccessible State : Supported 00:32:31.704 ANA Persistent Loss State : Supported 00:32:31.704 ANA Change State : Supported 00:32:31.704 ANAGRPID is not changed : No 00:32:31.704 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:32:31.704 00:32:31.704 ANA Group Identifier Maximum : 128 00:32:31.704 Number of ANA Group Identifiers : 128 00:32:31.704 Max Number of Allowed Namespaces : 1024 00:32:31.704 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:32:31.704 Command Effects Log Page: Supported 00:32:31.704 Get Log Page Extended Data: Supported 00:32:31.704 Telemetry Log Pages: Not Supported 00:32:31.704 Persistent Event Log Pages: Not Supported 00:32:31.704 Supported Log Pages Log Page: May Support 00:32:31.704 Commands Supported & Effects Log Page: Not Supported 00:32:31.704 Feature Identifiers & Effects Log Page:May Support 00:32:31.704 NVMe-MI Commands & Effects Log Page: May Support 00:32:31.704 Data Area 4 for Telemetry Log: Not Supported 00:32:31.704 Error Log Page Entries Supported: 128 00:32:31.704 Keep Alive: Supported 00:32:31.704 Keep Alive Granularity: 1000 ms 00:32:31.704 00:32:31.704 NVM Command Set Attributes 00:32:31.704 ========================== 00:32:31.704 Submission Queue Entry Size 00:32:31.704 Max: 64 00:32:31.704 Min: 64 00:32:31.704 Completion Queue Entry Size 00:32:31.704 Max: 16 00:32:31.704 Min: 16 00:32:31.704 Number of Namespaces: 1024 00:32:31.704 Compare Command: Not Supported 00:32:31.704 Write Uncorrectable Command: Not Supported 00:32:31.704 Dataset Management Command: Supported 
00:32:31.704 Write Zeroes Command: Supported 00:32:31.704 Set Features Save Field: Not Supported 00:32:31.704 Reservations: Not Supported 00:32:31.704 Timestamp: Not Supported 00:32:31.704 Copy: Not Supported 00:32:31.704 Volatile Write Cache: Present 00:32:31.704 Atomic Write Unit (Normal): 1 00:32:31.704 Atomic Write Unit (PFail): 1 00:32:31.704 Atomic Compare & Write Unit: 1 00:32:31.704 Fused Compare & Write: Not Supported 00:32:31.704 Scatter-Gather List 00:32:31.704 SGL Command Set: Supported 00:32:31.704 SGL Keyed: Not Supported 00:32:31.704 SGL Bit Bucket Descriptor: Not Supported 00:32:31.704 SGL Metadata Pointer: Not Supported 00:32:31.704 Oversized SGL: Not Supported 00:32:31.704 SGL Metadata Address: Not Supported 00:32:31.704 SGL Offset: Supported 00:32:31.704 Transport SGL Data Block: Not Supported 00:32:31.704 Replay Protected Memory Block: Not Supported 00:32:31.704 00:32:31.704 Firmware Slot Information 00:32:31.704 ========================= 00:32:31.704 Active slot: 0 00:32:31.704 00:32:31.704 Asymmetric Namespace Access 00:32:31.704 =========================== 00:32:31.704 Change Count : 0 00:32:31.704 Number of ANA Group Descriptors : 1 00:32:31.704 ANA Group Descriptor : 0 00:32:31.704 ANA Group ID : 1 00:32:31.704 Number of NSID Values : 1 00:32:31.704 Change Count : 0 00:32:31.704 ANA State : 1 00:32:31.704 Namespace Identifier : 1 00:32:31.704 00:32:31.704 Commands Supported and Effects 00:32:31.704 ============================== 00:32:31.704 Admin Commands 00:32:31.704 -------------- 00:32:31.704 Get Log Page (02h): Supported 00:32:31.704 Identify (06h): Supported 00:32:31.704 Abort (08h): Supported 00:32:31.704 Set Features (09h): Supported 00:32:31.704 Get Features (0Ah): Supported 00:32:31.704 Asynchronous Event Request (0Ch): Supported 00:32:31.704 Keep Alive (18h): Supported 00:32:31.704 I/O Commands 00:32:31.704 ------------ 00:32:31.704 Flush (00h): Supported 00:32:31.704 Write (01h): Supported LBA-Change 00:32:31.704 Read (02h): Supported 00:32:31.704 Write Zeroes (08h): Supported LBA-Change 00:32:31.704 Dataset Management (09h): Supported 00:32:31.704 00:32:31.704 Error Log 00:32:31.704 ========= 00:32:31.704 Entry: 0 00:32:31.704 Error Count: 0x3 00:32:31.704 Submission Queue Id: 0x0 00:32:31.704 Command Id: 0x5 00:32:31.704 Phase Bit: 0 00:32:31.704 Status Code: 0x2 00:32:31.704 Status Code Type: 0x0 00:32:31.704 Do Not Retry: 1 00:32:31.963 Error Location: 0x28 00:32:31.963 LBA: 0x0 00:32:31.963 Namespace: 0x0 00:32:31.963 Vendor Log Page: 0x0 00:32:31.963 ----------- 00:32:31.963 Entry: 1 00:32:31.963 Error Count: 0x2 00:32:31.963 Submission Queue Id: 0x0 00:32:31.963 Command Id: 0x5 00:32:31.963 Phase Bit: 0 00:32:31.963 Status Code: 0x2 00:32:31.963 Status Code Type: 0x0 00:32:31.963 Do Not Retry: 1 00:32:31.963 Error Location: 0x28 00:32:31.963 LBA: 0x0 00:32:31.963 Namespace: 0x0 00:32:31.963 Vendor Log Page: 0x0 00:32:31.963 ----------- 00:32:31.963 Entry: 2 00:32:31.963 Error Count: 0x1 00:32:31.963 Submission Queue Id: 0x0 00:32:31.963 Command Id: 0x4 00:32:31.963 Phase Bit: 0 00:32:31.963 Status Code: 0x2 00:32:31.963 Status Code Type: 0x0 00:32:31.963 Do Not Retry: 1 00:32:31.963 Error Location: 0x28 00:32:31.963 LBA: 0x0 00:32:31.963 Namespace: 0x0 00:32:31.963 Vendor Log Page: 0x0 00:32:31.963 00:32:31.963 Number of Queues 00:32:31.963 ================ 00:32:31.963 Number of I/O Submission Queues: 128 00:32:31.963 Number of I/O Completion Queues: 128 00:32:31.963 00:32:31.963 ZNS Specific Controller Data 00:32:31.963 
============================ 00:32:31.963 Zone Append Size Limit: 0 00:32:31.963 00:32:31.963 00:32:31.963 Active Namespaces 00:32:31.963 ================= 00:32:31.963 get_feature(0x05) failed 00:32:31.963 Namespace ID:1 00:32:31.963 Command Set Identifier: NVM (00h) 00:32:31.963 Deallocate: Supported 00:32:31.963 Deallocated/Unwritten Error: Not Supported 00:32:31.963 Deallocated Read Value: Unknown 00:32:31.963 Deallocate in Write Zeroes: Not Supported 00:32:31.963 Deallocated Guard Field: 0xFFFF 00:32:31.963 Flush: Supported 00:32:31.963 Reservation: Not Supported 00:32:31.963 Namespace Sharing Capabilities: Multiple Controllers 00:32:31.963 Size (in LBAs): 1953525168 (931GiB) 00:32:31.963 Capacity (in LBAs): 1953525168 (931GiB) 00:32:31.963 Utilization (in LBAs): 1953525168 (931GiB) 00:32:31.963 UUID: 4623c557-67fd-4878-b5bc-e9d73792c848 00:32:31.963 Thin Provisioning: Not Supported 00:32:31.963 Per-NS Atomic Units: Yes 00:32:31.963 Atomic Boundary Size (Normal): 0 00:32:31.963 Atomic Boundary Size (PFail): 0 00:32:31.963 Atomic Boundary Offset: 0 00:32:31.963 NGUID/EUI64 Never Reused: No 00:32:31.963 ANA group ID: 1 00:32:31.963 Namespace Write Protected: No 00:32:31.963 Number of LBA Formats: 1 00:32:31.963 Current LBA Format: LBA Format #00 00:32:31.963 LBA Format #00: Data Size: 512 Metadata Size: 0 00:32:31.963 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:32:31.963 rmmod nvme_tcp 00:32:31.963 rmmod nvme_fabrics 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:31.963 21:39:06 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:32:33.867 
21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:32:33.867 21:39:08 nvmf_tcp.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:32:35.241 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:35.241 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:35.241 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:35.241 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:35.241 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:35.241 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:35.241 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:35.241 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:35.241 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:32:35.241 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:32:35.241 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:32:35.241 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:32:35.241 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:32:35.241 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:32:35.241 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:32:35.241 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:32:36.176 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:32:36.176 00:32:36.176 real 0m9.246s 00:32:36.176 user 0m1.873s 00:32:36.176 sys 0m3.336s 00:32:36.176 21:39:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:32:36.176 21:39:10 nvmf_tcp.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:32:36.176 ************************************ 00:32:36.176 END TEST nvmf_identify_kernel_target 00:32:36.176 ************************************ 00:32:36.176 21:39:10 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:32:36.176 21:39:10 nvmf_tcp -- nvmf/nvmf.sh@105 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:36.176 21:39:10 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:32:36.176 21:39:10 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:32:36.176 21:39:10 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:36.176 ************************************ 00:32:36.176 START TEST nvmf_auth_host 00:32:36.176 ************************************ 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:32:36.176 * Looking for test storage... 00:32:36.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:36.176 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@21 -- # 
ckeys=() 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:32:36.177 21:39:10 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:38.706 
21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:38.706 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:32:38.706 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:32:38.707 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:32:38.707 Found net devices under 0000:0a:00.0: 
cvl_0_0 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:32:38.707 Found net devices under 0000:0a:00.1: cvl_0_1 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:32:38.707 21:39:12 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:32:38.707 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:38.707 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.125 ms 00:32:38.707 00:32:38.707 --- 10.0.0.2 ping statistics --- 00:32:38.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.707 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:38.707 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:38.707 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.095 ms 00:32:38.707 00:32:38.707 --- 10.0.0.1 ping statistics --- 00:32:38.707 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:38.707 rtt min/avg/max/mdev = 0.095/0.095/0.095/0.000 ms 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=1044125 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 1044125 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1044125 ']' 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=82965dfbbb8ab7124cc72de6050ad038 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.k0d 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 82965dfbbb8ab7124cc72de6050ad038 0 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 82965dfbbb8ab7124cc72de6050ad038 0 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=82965dfbbb8ab7124cc72de6050ad038 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.k0d 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.k0d 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.k0d 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:38.707 
21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=b3d4a694150e31c9bd4cf6d90c84b0a78352209e3f6705857fb668847590cccd 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:38.707 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7Qu 00:32:38.708 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b3d4a694150e31c9bd4cf6d90c84b0a78352209e3f6705857fb668847590cccd 3 00:32:38.708 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b3d4a694150e31c9bd4cf6d90c84b0a78352209e3f6705857fb668847590cccd 3 00:32:38.708 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:38.708 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:38.708 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b3d4a694150e31c9bd4cf6d90c84b0a78352209e3f6705857fb668847590cccd 00:32:38.708 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:38.708 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7Qu 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7Qu 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.7Qu 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5082ab79b0ac3f491338468f75f149e2e04ad7fac0fea92c 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.ZKY 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5082ab79b0ac3f491338468f75f149e2e04ad7fac0fea92c 0 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5082ab79b0ac3f491338468f75f149e2e04ad7fac0fea92c 0 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5082ab79b0ac3f491338468f75f149e2e04ad7fac0fea92c 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.ZKY 00:32:38.965 21:39:13 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.ZKY 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.ZKY 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=818206e88899d2fedbbab649e84b99da2dc4923a6408f73c 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.2YG 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 818206e88899d2fedbbab649e84b99da2dc4923a6408f73c 2 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 818206e88899d2fedbbab649e84b99da2dc4923a6408f73c 2 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=818206e88899d2fedbbab649e84b99da2dc4923a6408f73c 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.2YG 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.2YG 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.2YG 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:38.965 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f14403bbd40b50e5cfc56d07e13fa3df 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.0Qn 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f14403bbd40b50e5cfc56d07e13fa3df 1 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f14403bbd40b50e5cfc56d07e13fa3df 1 
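The gen_dhchap_key calls above draw random hex from xxd and hand it to format_dhchap_key, which emits the DH-HMAC-CHAP secret representation "DHHC-1:<hash>:<base64>:" used everywhere below: the ASCII hex string itself is the secret, a little-endian CRC32 of it is appended, and the result is base64-encoded; the two-digit hash field is 00/01/02/03 for null/sha256/sha384/sha512. A standalone sketch of what each 'python -' heredoc computes (approximate reconstruction, not the verbatim common.sh code):

key="5082ab79b0ac3f491338468f75f149e2e04ad7fac0fea92c"   # keys[1] material from this run
python3 - "$key" 0 <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()                   # the ASCII hex string itself is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")  # CRC32 integrity tag appended before encoding
print("DHHC-1:{:02x}:{}:".format(int(sys.argv[2]), base64.b64encode(secret + crc).decode()))
EOF

Because the base64 body begins with the encoded hex string, the DHHC-1:00:NTA4... value that host/auth.sh uses for keys[1] further down is recoverable by eye from the key printed here.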
00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f14403bbd40b50e5cfc56d07e13fa3df 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.0Qn 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.0Qn 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.0Qn 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=03fd4e8ca7aed6e1716e3054bc3581d8 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Cem 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 03fd4e8ca7aed6e1716e3054bc3581d8 1 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 03fd4e8ca7aed6e1716e3054bc3581d8 1 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=03fd4e8ca7aed6e1716e3054bc3581d8 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:38.966 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Cem 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Cem 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Cem 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@727 -- # key=ecdd7c893c039f65f421682a7b42ac8aad9af59d0dc79f95 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ka5 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ecdd7c893c039f65f421682a7b42ac8aad9af59d0dc79f95 2 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ecdd7c893c039f65f421682a7b42ac8aad9af59d0dc79f95 2 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ecdd7c893c039f65f421682a7b42ac8aad9af59d0dc79f95 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ka5 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ka5 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.ka5 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=cbb6a5d2aab6512b9948179bbb7e446f 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.nCI 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key cbb6a5d2aab6512b9948179bbb7e446f 0 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 cbb6a5d2aab6512b9948179bbb7e446f 0 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=cbb6a5d2aab6512b9948179bbb7e446f 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.nCI 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.nCI 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.nCI 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@723 -- # local 
digest len file key 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:32:39.223 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@727 -- # key=7a2300b0b061c912d6b1a01e3245e8a87e58854884e571062afd3fa498e39bfb 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.a12 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 7a2300b0b061c912d6b1a01e3245e8a87e58854884e571062afd3fa498e39bfb 3 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 7a2300b0b061c912d6b1a01e3245e8a87e58854884e571062afd3fa498e39bfb 3 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # key=7a2300b0b061c912d6b1a01e3245e8a87e58854884e571062afd3fa498e39bfb 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.a12 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.a12 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.a12 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 1044125 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 1044125 ']' 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:39.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
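With all five key/ckey pairs on disk, the loop below registers each file with the target application's keyring so later attach calls can reference secrets by name rather than by path. The shape of the calls (rpc_cmd is the usual wrapper around scripts/rpc.py aimed at the app inside the namespace; paths are this run's mktemp results):

rpc.py keyring_file_add_key key0  /tmp/spdk.key-null.k0d
rpc.py keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Qu
rpc.py keyring_file_add_key key1  /tmp/spdk.key-null.ZKY
rpc.py keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2YG
# ... and so on through key4

keyN is the host secret for keyid N and ckeyN the matching controller secret used for bidirectional authentication; keyid 4 deliberately has no ckey, exercising the unidirectional path.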
00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:32:39.224 21:39:13 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.k0d 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.7Qu ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.7Qu 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.ZKY 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.2YG ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.2YG 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.0Qn 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Cem ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Cem 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 
00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.ka5 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.nCI ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.nCI 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.a12 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 
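configure_kernel_target, entered here, builds the kernel target entirely through configfs: it claims a free local NVMe namespace for backing storage (the setup.sh reset and "No valid GPT data, bailing" probe below are that scan), then creates a subsystem, a namespace, and a TCP port and wires them together. The mkdir/echo/ln choreography that follows condenses to roughly this; the mapping of each bare echo in the trace onto its attribute is inferred from the standard nvmet configfs layout, so treat the attribute names as an assumption:

modprobe nvmet
cd /sys/kernel/config/nvmet
mkdir subsystems/nqn.2024-02.io.spdk:cnode0
mkdir subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1
mkdir ports/1
echo /dev/nvme0n1 > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/device_path
echo 1            > subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1/enable
echo 10.0.0.1     > ports/1/addr_traddr
echo tcp          > ports/1/addr_trtype
echo 4420         > ports/1/addr_trsvcid
echo ipv4         > ports/1/addr_adrfam
ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ports/1/subsystems/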
00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:32:39.482 21:39:14 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:32:40.855 Waiting for block devices as requested 00:32:40.855 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:32:40.855 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:40.855 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:41.113 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:41.113 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:41.113 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:41.113 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:41.371 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:41.371 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:41.371 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:32:41.371 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:32:41.627 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:32:41.627 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:32:41.627 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:32:41.627 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:32:41.883 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:32:41.883 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:32:42.140 21:39:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:32:42.397 No valid GPT data, bailing 00:32:42.397 21:39:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:32:42.397 21:39:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:32:42.397 21:39:16 nvmf_tcp.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:32:42.397 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:32:42.397 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@667 -- # echo 1 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:32:42.398 21:39:16 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:32:42.398 00:32:42.398 Discovery Log Number of Records 2, Generation counter 2 00:32:42.398 =====Discovery Log Entry 0====== 00:32:42.398 trtype: tcp 00:32:42.398 adrfam: ipv4 00:32:42.398 subtype: current discovery subsystem 00:32:42.398 treq: not specified, sq flow control disable supported 00:32:42.398 portid: 1 00:32:42.398 trsvcid: 4420 00:32:42.398 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:32:42.398 traddr: 10.0.0.1 00:32:42.398 eflags: none 00:32:42.398 sectype: none 00:32:42.398 =====Discovery Log Entry 1====== 00:32:42.398 trtype: tcp 00:32:42.398 adrfam: ipv4 00:32:42.398 subtype: nvme subsystem 00:32:42.398 treq: not specified, sq flow control disable supported 00:32:42.398 portid: 1 00:32:42.398 trsvcid: 4420 00:32:42.398 subnqn: nqn.2024-02.io.spdk:cnode0 00:32:42.398 traddr: 10.0.0.1 00:32:42.398 eflags: none 00:32:42.398 sectype: none 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 
]] 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.398 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.655 nvme0n1 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.655 
21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.655 
21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.655 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.912 nvme0n1 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:42.912 21:39:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:42.912 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.169 nvme0n1 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
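The nested digest/dhgroup/keyid loops repeat one fixed pattern, of which the sha256/ffdhe2048 passes running here are the first examples: nvmet_auth_set_key writes the negotiation parameters and DHHC-1 secrets into the allowed host's configfs entry, and connect_authenticate then pins the SPDK host to the same single digest and DH group before attaching. One iteration spelled out; the dhchap_* attribute names follow the kernel's nvmet host configfs layout (an inference, since the trace only shows the bare echoes), while the RPC lines appear verbatim in the trace:

h=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0
echo 'hmac(sha256)'   > "$h/dhchap_hash"
echo ffdhe2048        > "$h/dhchap_dhgroup"
echo "DHHC-1:00:...:" > "$h/dhchap_key"        # keys[keyid], elided here
echo "DHHC-1:02:...:" > "$h/dhchap_ctrl_key"   # ckeys[keyid], enables bidirectional auth
rpc.py bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1
rpc.py bdev_nvme_detach_controller nvme0       # tear down before the next combination

The pass criterion for each combination is visible in the trace as the "nvme0n1" name check via bdev_nvme_get_controllers followed by a clean detach.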
00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.169 nvme0n1 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.169 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:43.426 21:39:17 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.426 21:39:17 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.426 nvme0n1 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.426 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.690 21:39:18 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.691 nvme0n1 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.691 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.986 nvme0n1 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=1 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:43.986 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.289 nvme0n1 00:32:44.289 
21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.289 21:39:18 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.546 nvme0n1 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
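[editor's note] The entries above show the target-side half of each test iteration: for every (digest, dhgroup, keyid) combination, nvmet_auth_set_key echoes the HMAC name (e.g. 'hmac(sha256)'), the FFDHE group, and the DHHC-1 secret for that key index. The xtrace output records only the echo payloads, not their redirection targets; the sketch below reconstructs the helper under the assumption that they land in the Linux kernel nvmet per-host configfs attributes — the configfs paths and local variable names are illustrative guesses, not taken from the script, while the keys/ckeys arrays and the ckey-optional behaviour are visible in the trace itself.

  # Hedged sketch of the target-side helper exercised above.
  # Assumption: the echoed values are written into the nvmet in-band-auth
  # attributes under /sys/kernel/config/nvmet/hosts/<hostnqn>/.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local hostdir=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

      echo "hmac($digest)"  > "$hostdir/dhchap_hash"     # e.g. hmac(sha256)
      echo "$dhgroup"       > "$hostdir/dhchap_dhgroup"  # e.g. ffdhe3072
      echo "${keys[keyid]}" > "$hostdir/dhchap_key"      # DHHC-1:xx:...: secret
      # A controller secret is set only for key indexes that define one,
      # enabling bidirectional authentication for that iteration.
      if [[ -n ${ckeys[keyid]} ]]; then
          echo "${ckeys[keyid]}" > "$hostdir/dhchap_ctrl_key"
      fi
  }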
00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:32:44.546 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.547 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.804 nvme0n1 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.804 
21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:44.804 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:44.805 21:39:19 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:44.805 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.061 nvme0n1 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:32:45.061 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:32:45.062 21:39:19 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.062 21:39:19 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.319 nvme0n1 00:32:45.319 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.319 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.320 21:39:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.320 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.884 nvme0n1 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:45.884 21:39:20 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:45.884 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.142 nvme0n1 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 
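[editor's note] The initiator-side counterpart, connect_authenticate, is also visible in the trace: it restricts the host to the digest and DH group under test via bdev_nvme_set_options, attaches with the matching --dhchap-key (plus --dhchap-ctrlr-key when a controller secret exists for that key index), confirms the controller materialized via bdev_nvme_get_controllers, and detaches so the next iteration starts clean. A minimal sketch, with the rpc_cmd invocations and flags copied from the log and only the surrounding control flow assumed:

  # Hedged sketch of the initiator-side step seen in the trace.
  # rpc_cmd is SPDK's test wrapper around scripts/rpc.py; keys/ckeys are
  # the DHHC-1 secret arrays set up earlier in the script.
  connect_authenticate() {
      local digest=$1 dhgroup=$2 keyid=$3

      # Constrain the host to exactly the digest/DH group under test.
      rpc_cmd bdev_nvme_set_options \
          --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"

      # Attach with keyN; ckeyN is appended only when a controller key exists
      # (the ${ckeys[keyid]:+...} expansion is taken verbatim from the trace).
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
          -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" \
          ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}

      # Authentication succeeded only if the named controller shows up.
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]

      # Tear down before the next (digest, dhgroup, keyid) combination.
      rpc_cmd bdev_nvme_detach_controller nvme0
  }

The detach at the end explains the recurring bdev_nvme_detach_controller/get_controllers pairs throughout this section: each successful handshake is verified and then dismantled before the loop advances to the next key or DH group.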
00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.142 21:39:20 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.400 nvme0n1 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.400 21:39:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.400 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.966 nvme0n1 00:32:46.966 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.966 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:46.966 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:46.966 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.966 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.966 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:46.967 21:39:21 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:46.967 21:39:21 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.532 nvme0n1 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:47.532 
21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:47.532 21:39:22 
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:47.532 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:47.533 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:47.533 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:47.533 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.098 nvme0n1 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:48.098 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.099 21:39:22 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.663 nvme0n1 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:48.663 
21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:48.663 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.225 nvme0n1 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.225 21:39:23 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.482 21:39:24 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:49.483 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.047 nvme0n1 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.047 21:39:24 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.980 nvme0n1 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:50.980 21:39:25 
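
[editor's note] The DHHC-1 strings echoed into the target configuration follow the secret representation from the NVMe in-band authentication spec, the same format nvme-cli emits: the middle field names an optional hash of the secret (00 = none, 01/02/03 = SHA-256/384/512) and the final field is the base64 of the raw secret followed by a 4-byte CRC-32. A quick sanity check of key0 from this run (illustrative only, not part of the test script):

    # The base64 payload decodes to raw secret + 4-byte CRC-32, so the
    # 32-byte secret of key0 should decode to 36 bytes.
    key='DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+:'
    cut -d: -f3 <<< "$key" | base64 -d | wc -c    # prints 36
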
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:50.980 21:39:25 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.913 nvme0n1 00:32:51.913 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.913 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:51.913 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.913 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:51.913 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:51.913 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:51.914 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:51.914 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:51.914 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:51.914 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:52.171 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:52.172 21:39:26 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.104 nvme0n1 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:53.104 
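
[editor's note] The ip_candidates block that repeats before every attach is the get_main_ns_ip helper picking the address variable for the active transport; for tcp it dereferences NVMF_INITIATOR_IP, which is 10.0.0.1 throughout this run. Reconstructed from the nvmf/common.sh@741-755 markers above (the log only shows expanded values, so the transport variable name $TEST_TRANSPORT is an assumption):

    # Transport -> address selection as traced at nvmf/common.sh@741-755.
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z $TEST_TRANSPORT ]] && return 1
        [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # NVMF_INITIATOR_IP for tcp
        [[ -z ${!ip} ]] && return 1            # indirect lookup; 10.0.0.1 here
        echo "${!ip}"
    }
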
21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:53.104 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:53.105 21:39:27 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.036 nvme0n1 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:32:54.036 
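
[editor's note] Key index 4 is the odd one out: its controller key is empty (ckey= above), so the ${ckeys[keyid]:+...} expansion at host/auth.sh@58 produces no --dhchap-ctrlr-key argument at all and the attach exercises unidirectional (host-only) authentication, unlike indices 0-3, which are bidirectional. The mechanism, with illustrative placeholder values (the real ckeys array is built earlier in auth.sh):

    # How ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) behaves;
    # index 4 is intentionally left empty, as in the script.
    ckeys=(c0 c1 c2 c3 "")
    for keyid in "${!ckeys[@]}"; do
        ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
        echo "key$keyid: ${ckey[*]:-no controller key (unidirectional)}"
    done
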
21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:54.036 21:39:28 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.411 nvme0n1 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe2048 0 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller 
-b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.411 nvme0n1 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.411 21:39:29 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 
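
[editor's note] By 00:32:55 the sha256 sweep is finished and the outer digest loop has advanced to sha384, restarting the dhgroup list at ffdhe2048. The overall shape implied by the host/auth.sh@100-@104 markers is sketched below; the array contents are a plausible reconstruction, not copied from auth.sh (only sha256/sha384 and the ffdhe groups seen in this excerpt are actually visible):

    # Sweep structure implied by host/auth.sh@100 (digests), @101 (dhgroups)
    # and @102 (key indices); keys 0..4 and both helpers are defined earlier
    # in the script.
    digests=(sha256 sha384 sha512)
    dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
    for digest in "${digests[@]}"; do
        for dhgroup in "${dhgroups[@]}"; do
            for keyid in "${!keys[@]}"; do
                nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"
                connect_authenticate "$digest" "$dhgroup" "$keyid"
            done
        done
    done
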
00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.411 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.669 nvme0n1 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- 
host/auth.sh@44 -- # keyid=2 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.669 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.926 nvme0n1 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host 
-- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:32:55.926 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@741 -- # local ip 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.927 nvme0n1 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:55.927 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
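
On the host side, each connect_authenticate round traced here reduces to four RPCs against the running SPDK application (rpc_cmd is autotest's wrapper around scripts/rpc.py); the backslashes in the [[ nvme0 == \n\v\m\e\0 ]] trace are just xtrace marking a literal, non-glob comparison. A condensed sketch of one round, assuming key2/ckey2 were registered as key names earlier in the test (that setup is not shown in this stretch of log):

# One authenticated attach/verify/detach cycle; parameters taken from the trace.
rpc.py bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
[[ $(rpc.py bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # attach authenticated
rpc.py bdev_nvme_detach_controller nvme0  # clean up before the next key/group combination
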
DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.184 nvme0n1 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.184 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.185 21:39:30 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.449 nvme0n1 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 
00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.449 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.711 nvme0n1 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
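
The get_main_ns_ip block that repeats throughout the trace (local -A ip_candidates ... echo 10.0.0.1) just resolves which address the host should dial for the active transport. In condensed form, with the transport variable name assumed since the trace only shows its expanded value, tcp:

get_main_ns_ip() {
    # Map transport -> name of the variable holding the target address,
    # then expand that name indirectly; for tcp this yields 10.0.0.1 here.
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    local ip=${ip_candidates[$TEST_TRANSPORT]}
    echo "${!ip}"
}
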
-- # for keyid in "${!keys[@]}" 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.711 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.968 nvme0n1 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:56.968 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.225 nvme0n1 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.225 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.226 21:39:31 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.484 nvme0n1 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:57.484 21:39:32 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- 
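
The for dhgroup / for keyid frames at host/auth.sh@101 and @102 show the sweep producing this whole stretch of log: with the digest pinned to sha384, every DH group is exercised against every configured key slot, one target-side key install plus one host-side attach/verify/detach per combination. Reconstructed from the traced loop headers (the dhgroups and keys arrays are populated earlier in the script, outside this section):

for dhgroup in "${dhgroups[@]}"; do        # host/auth.sh@101: ffdhe2048, ffdhe3072, ffdhe4096, ffdhe6144, ...
    for keyid in "${!keys[@]}"; do         # host/auth.sh@102: key slots 0..4
        nvmet_auth_set_key sha384 "$dhgroup" "$keyid"    # host/auth.sh@103: program the kernel target
        connect_authenticate sha384 "$dhgroup" "$keyid"  # host/auth.sh@104: attach, verify nvme0, detach
    done
done
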
nvmf/common.sh@742 -- # ip_candidates=() 00:32:57.484 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:57.485 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.051 nvme0n1 00:32:58.051 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.052 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.310 nvme0n1 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.310 21:39:32 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.310 21:39:32 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.567 nvme0n1 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:32:58.567 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:32:58.568 21:39:33 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:58.568 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.132 nvme0n1 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:32:59.132 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.388 nvme0n1 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.388 21:39:33 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:32:59.388 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.389 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.981 nvme0n1 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:32:59.981 21:39:34 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 nvme0n1 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 21:39:35 
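For orientation: every (digest, dhgroup, keyid) combination in this trace runs the same fixed cycle on the initiator side. A minimal sketch of that cycle, reconstructed from the host/auth.sh@55-65 markers above (rpc_cmd is assumed to wrap SPDK's scripts/rpc.py against the running initiator; the RPC names and flags appear verbatim in the trace):

    # connect_authenticate as traced at host/auth.sh@55-65 (sketch)
    connect_authenticate() {
        local digest=$1 dhgroup=$2 keyid=$3
        # keyids without a controller key expand to nothing here (host/auth.sh@58)
        local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})

        # restrict the initiator to the digest/DH group under test
        rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        # connect with the DH-CHAP key; 10.0.0.1 comes from get_main_ns_ip (sketched further below)
        rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a "$(get_main_ns_ip)" -s 4420 \
            -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
            --dhchap-key "key${keyid}" "${ckey[@]}"
        # authentication passed iff the controller actually materialized
        [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
        rpc_cmd bdev_nvme_detach_controller nvme0
    }

The interleaved "nvme0n1" lines appear to be the namespace bdev surfacing after each successful attach, not output the script prints itself.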
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:00.545 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.111 nvme0n1 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.111 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.369 21:39:35 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.936 nvme0n1 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 
00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:01.936 21:39:36 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.516 nvme0n1 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 
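On the target side, each iteration first publishes the matching secret via nvmet_auth_set_key (host/auth.sh@42-51). The echoes are visible in the trace, but xtrace does not show where they are redirected; the configfs paths in this sketch are an assumption based on the kernel nvmet host directory layout, not something this log confirms:

    # nvmet_auth_set_key as traced at host/auth.sh@42-51 (sketch; paths assumed)
    nvmet_auth_set_key() {
        local digest=$1 dhgroup=$2 keyid=$3
        local key=${keys[keyid]} ckey=${ckeys[keyid]:-}
        local host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed path

        echo "hmac(${digest})" > "$host/dhchap_hash"      # e.g. 'hmac(sha384)' at @48
        echo "$dhgroup"        > "$host/dhchap_dhgroup"   # e.g. ffdhe8192 at @49
        echo "$key"            > "$host/dhchap_key"       # host secret at @50
        [[ -z $ckey ]] || echo "$ckey" > "$host/dhchap_ctrl_key"   # optional ctrlr key, @51
    }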
00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:02.516 21:39:37 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.449 nvme0n1 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe8192 1 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:03.449 21:39:38 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.823 nvme0n1 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe8192 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:04.823 21:39:39 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.757 nvme0n1 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:05.757 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:05.758 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:05.758 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:05.758 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:05.758 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:05.758 21:39:40 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:05.758 21:39:40 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:05.758 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:05.758 21:39:40 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.692 nvme0n1 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:06.692 21:39:41 
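The get_main_ns_ip helper that keeps resolving to 10.0.0.1 is largely visible in the nvmf/common.sh@741-755 markers: it maps the transport to the *name* of an environment variable, then expands that name indirectly. A sketch under stated assumptions (TEST_TRANSPORT as the selector variable is assumed, the trace only ever shows "tcp"; lines @751-754 never fire here, so any fallback they hold is not reconstructed):

    # get_main_ns_ip as traced at nvmf/common.sh@741-755 (sketch)
    get_main_ns_ip() {
        local ip
        local -A ip_candidates
        ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
        ip_candidates["tcp"]=NVMF_INITIATOR_IP

        [[ -z ${TEST_TRANSPORT:-} || -z ${ip_candidates[$TEST_TRANSPORT]:-} ]] && return 1
        ip=${ip_candidates[$TEST_TRANSPORT]}   # the variable *name*, e.g. NVMF_INITIATOR_IP
        [[ -z ${!ip} ]] && return 1            # indirect expansion; 10.0.0.1 in this run
        echo "${!ip}"
    }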
nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:06.692 21:39:41 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.626 nvme0n1 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:07.626 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.627 nvme0n1 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.627 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.885 21:39:42 
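All secrets in this run use the NVMe DH-HMAC-CHAP ASCII form DHHC-1:<t>:<base64>:, where <t> names the optional secret transform (00 = none, 01 = SHA-256, 02 = SHA-384, 03 = SHA-512) and the base64 payload carries the secret plus a CRC-32 check. To mint compatible keys outside this harness, nvme-cli ships a generator; the flags below are believed current but should be verified against nvme gen-dhchap-key --help on your installed version:

    # hypothetical invocation; verify flags locally
    # -m: transform hash (0=none, 1=sha256, 2=sha384, 3=sha512)
    # -l: secret length in bytes; -n: host NQN the transform binds the key to
    nvme gen-dhchap-key -m 2 -l 48 -n nqn.2024-02.io.spdk:host0
    # -> DHHC-1:02:<base64 secret+crc>: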
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:07.885 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.886 nvme0n1 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:07.886 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 2 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.145 nvme0n1 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.145 21:39:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.145 21:39:42 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.145 21:39:42 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.405 nvme0n1 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.405 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.663 nvme0n1 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.663 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.920 nvme0n1 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.920 
21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:08.920 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:08.921 21:39:43 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:08.921 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.178 nvme0n1 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 
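The host/auth.sh@48-@51 echoes traced above are the body of nvmet_auth_set_key: xtrace hides redirections, so each bare "echo" is actually writing one DH-HMAC-CHAP attribute of the kernel nvmet host entry (digest, dhgroup, host key, and, when a ckey exists for the keyid, the controller key for bidirectional auth). A minimal sketch of that pattern, assuming the standard Linux nvmet configfs layout and the host NQN used in this run; the helper's real paths and variable names are assumptions here, not the verbatim script:

# illustrative reconstruction of nvmet_auth_set_key, not the verbatim helper;
# configfs path and $ckey handling are assumed from the trace above
hostnqn=nqn.2024-02.io.spdk:host0
host_dir=/sys/kernel/config/nvmet/hosts/$hostnqn
echo 'hmac(sha512)' > "$host_dir/dhchap_hash"    # digest negotiated for DH-HMAC-CHAP
echo ffdhe3072 > "$host_dir/dhchap_dhgroup"      # FFDHE group for the DH step
echo "DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5:" > "$host_dir/dhchap_key"
# controller key is only written when this keyid has a ckey (bidirectional auth)
[[ -n $ckey ]] && echo "$ckey" > "$host_dir/dhchap_ctrl_key"

The DHHC-1:NN:...: strings are the standard NVMe-oF secret representation (NN encodes the hash the secret was generated for, followed by the base64 secret and a checksum), which is why the same literal keys recur for every digest/dhgroup combination in this trace.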
00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.178 21:39:43 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.436 nvme0n1 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.436 21:39:44 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 
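The nvmf/common.sh@741-@755 lines repeating throughout this trace are get_main_ns_ip resolving which address the initiator should dial: the associative array maps each transport to the name of an environment variable, and that name is dereferenced afterwards, which is why the trace first shows ip=NVMF_INITIATOR_IP and only then the literal 10.0.0.1. A condensed sketch; using TEST_TRANSPORT as the selector is an assumption, since xtrace only shows the already-expanded value "tcp":

# illustrative reconstruction of the helper traced at nvmf/common.sh@741-@755
get_main_ns_ip() {
    local ip
    local -A ip_candidates=([rdma]=NVMF_FIRST_TARGET_IP [tcp]=NVMF_INITIATOR_IP)
    [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
    ip=${ip_candidates[$TEST_TRANSPORT]}  # holds the variable *name*, e.g. NVMF_INITIATOR_IP
    [[ -z ${!ip} ]] && return 1           # indirect expansion yields 10.0.0.1 in this run
    echo "${!ip}"
}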
00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.436 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.694 nvme0n1 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.694 
21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.694 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.951 nvme0n1 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key 
sha512 ffdhe4096 0 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f 
ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:09.952 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.210 nvme0n1 00:33:10.210 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.210 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.210 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.210 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.210 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.210 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.467 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.467 21:39:44 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.467 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.467 21:39:44 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:10.467 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.468 21:39:45 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.468 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.725 nvme0n1 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 
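Every connect_authenticate round in this trace (host/auth.sh@60-@65) is the same four-RPC sequence against the SPDK initiator: pin the allowed digest and dhgroup, attach with the key pair for the keyid under test, confirm a controller materialized, then tear it down so the next combination starts clean. The sketch below is condensed from the trace, with the literal flags that appear above; rpc_cmd wraps SPDK's JSON-RPC client:

# one authentication round, as driven repeatedly in this trace
rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key2 --dhchap-ctrlr-key ckey2
# the attach only succeeds if DH-HMAC-CHAP completed against the nvmet target
[[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
rpc_cmd bdev_nvme_detach_controller nvme0

Constraining bdev_nvme_set_options to a single digest and dhgroup per round is what makes the test exhaustive: a successful attach proves that exact sha512/ffdhe combination negotiated, rather than the initiator silently falling back to another allowed pairing.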
00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.725 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.983 nvme0n1 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # 
rpc_cmd bdev_nvme_get_controllers 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.983 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # 
local ip 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:10.984 21:39:45 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.548 nvme0n1 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:11.548 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.549 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.806 nvme0n1 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- 
# xtrace_disable 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:11.806 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 
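The repeated echo 'hmac(sha512)' / echo ffdhe6144 / echo DHHC-1:... entries above come from nvmet_auth_set_key, which provisions the kernel nvmet target side of DH-HMAC-CHAP. xtrace does not record redirections, so the destination of each echo is hidden in the trace; the values are consistent with writes to the per-host nvmet configfs attributes. A minimal sketch under that assumption (the configfs paths are inferred, not shown in the trace; the secrets are the key-0 pair from the cycle above):

    # Sketch: provision target-side DH-HMAC-CHAP for one host NQN via nvmet
    # configfs (assumed layout -- the xtrace above hides the redirect targets).
    host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0

    echo 'hmac(sha512)' > "$host/dhchap_hash"     # HMAC digest for the challenge
    echo ffdhe6144      > "$host/dhchap_dhgroup"  # finite-field DH group
    # Host secret (second field 00 = secret stored untransformed):
    echo 'DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+:' > "$host/dhchap_key"
    # Controller secret, used when the host also authenticates the controller
    # (bidirectional authentication):
    echo 'DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=:' > "$host/dhchap_ctrl_key"

In the DHHC-1:<t>:<base64>: secret representation, the second field records how the secret was transformed (00 = plain, 01/02/03 = hashed with SHA-256/384/512), and the base64 payload carries the secret plus a CRC-32 check value.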
00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:11.807 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.372 nvme0n1 00:33:12.372 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.372 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.372 21:39:46 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.372 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.372 21:39:46 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.372 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 
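One full connect_authenticate pass is visible just above for key 0: bdev_nvme_set_options pins the digest and DH group the initiator may offer, bdev_nvme_attach_controller authenticates with the matching key pair (the attach fails outright if DH-HMAC-CHAP fails), bdev_nvme_get_controllers piped through jq confirms that nvme0 exists, and bdev_nvme_detach_controller tears it down before the next keyid. rpc_cmd is the test harness's wrapper around SPDK's scripts/rpc.py; driven standalone, the same cycle might look like the sketch below (key1/ckey1 are assumed to be key names already registered with SPDK's keyring earlier in the run):

    # Sketch: one connect/verify/disconnect cycle against the target at
    # 10.0.0.1:4420, using the RPCs seen in the trace.
    rpc=scripts/rpc.py

    # Offer only sha512 + ffdhe6144 during DH-HMAC-CHAP negotiation.
    $rpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144

    # Attach; --dhchap-ctrlr-key requests bidirectional authentication.
    $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key key1 --dhchap-ctrlr-key ckey1

    # The controller only exists if authentication succeeded.
    [[ $($rpc bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    $rpc bdev_nvme_detach_controller nvme0

The negative tests later in the run (the "request:"/"response:" JSON blocks below) repeat the attach without the provisioned key, and the authentication failure surfaces as JSON-RPC error -5, "Input/output error", which is what the NOT wrapper asserts.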
00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.373 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.939 nvme0n1 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 
-- # for keyid in "${!keys[@]}" 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:12.939 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:12.940 21:39:47 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.506 nvme0n1 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:33:13.506 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:13.507 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.072 nvme0n1 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.072 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.073 21:39:48 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.073 21:39:48 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:14.073 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.073 21:39:48 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.638 nvme0n1 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:14.639 21:39:49 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODI5NjVkZmJiYjhhYjcxMjRjYzcyZGU2MDUwYWQwMzgMyFu+: 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YjNkNGE2OTQxNTBlMzFjOWJkNGNmNmQ5MGM4NGIwYTc4MzUyMjA5ZTNmNjcwNTg1N2ZiNjY4ODQ3NTkwY2NjZI9wDAg=: 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@742 -- # ip_candidates=() 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:14.639 21:39:49 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.012 nvme0n1 00:33:16.012 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.012 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.012 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.012 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.012 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.012 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.012 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.012 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.013 21:39:50 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.978 nvme0n1 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.978 21:39:51 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:ZjE0NDAzYmJkNDBiNTBlNWNmYzU2ZDA3ZTEzZmEzZGbHcI+5: 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: ]] 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MDNmZDRlOGNhN2FlZDZlMTcxNmUzMDU0YmMzNTgxZDhNlX6h: 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:16.978 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- 
nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:16.979 21:39:51 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.918 nvme0n1 00:33:17.918 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.918 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:17.918 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZWNkZDdjODkzYzAzOWY2NWY0MjE2ODJhN2I0MmFjOGFhZDlhZjU5ZDBkYzc5Zjk1iY/KfQ==: 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: ]] 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:Y2JiNmE1ZDJhYWI2NTEyYjk5NDgxNzliYmI3ZTQ0NmZ3ScKj: 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:33:17.919 21:39:52 
nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:17.919 21:39:52 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.850 nvme0n1 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:N2EyMzAwYjBiMDYxYzkxMmQ2YjFhMDFlMzI0NWU4YTg3ZTU4ODU0ODg0ZTU3MTA2MmFmZDNmYTQ5OGUzOWJmYuzCC1A=: 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:33:18.850 21:39:53 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.784 nvme0n1 00:33:19.784 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:19.784 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:33:19.784 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:19.784 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:19.784 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:33:19.784 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:NTA4MmFiNzliMGFjM2Y0OTEzMzg0NjhmNzVmMTQ5ZTJlMDRhZDdmYWMwZmVhOTJj6waGNw==: 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: ]] 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ODE4MjA2ZTg4ODk5ZDJmZWRiYmFiNjQ5ZTg0Yjk5ZGEyZGM0OTIzYTY0MDhmNzNjg9uQfA==: 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:33:20.042 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.043 
21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.043 request: 00:33:20.043 { 00:33:20.043 "name": "nvme0", 00:33:20.043 "trtype": "tcp", 00:33:20.043 "traddr": "10.0.0.1", 00:33:20.043 "adrfam": "ipv4", 00:33:20.043 "trsvcid": "4420", 00:33:20.043 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:20.043 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:20.043 "prchk_reftag": false, 00:33:20.043 "prchk_guard": false, 00:33:20.043 "hdgst": false, 00:33:20.043 "ddgst": false, 00:33:20.043 "method": "bdev_nvme_attach_controller", 00:33:20.043 "req_id": 1 00:33:20.043 } 00:33:20.043 Got JSON-RPC error response 00:33:20.043 response: 00:33:20.043 { 00:33:20.043 "code": -5, 00:33:20.043 "message": "Input/output error" 00:33:20.043 } 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.043 request: 00:33:20.043 { 00:33:20.043 "name": "nvme0", 00:33:20.043 "trtype": "tcp", 00:33:20.043 "traddr": "10.0.0.1", 00:33:20.043 "adrfam": "ipv4", 00:33:20.043 "trsvcid": "4420", 00:33:20.043 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:20.043 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:20.043 "prchk_reftag": false, 00:33:20.043 "prchk_guard": false, 00:33:20.043 "hdgst": false, 00:33:20.043 "ddgst": false, 00:33:20.043 "dhchap_key": "key2", 00:33:20.043 "method": "bdev_nvme_attach_controller", 00:33:20.043 "req_id": 1 00:33:20.043 } 00:33:20.043 Got JSON-RPC error response 00:33:20.043 response: 00:33:20.043 { 00:33:20.043 "code": -5, 00:33:20.043 "message": "Input/output error" 00:33:20.043 } 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:20.043 21:39:54 
nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.043 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:20.302 request: 00:33:20.302 { 00:33:20.302 "name": "nvme0", 00:33:20.302 "trtype": "tcp", 00:33:20.302 "traddr": "10.0.0.1", 00:33:20.302 "adrfam": "ipv4", 
00:33:20.302 "trsvcid": "4420", 00:33:20.302 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:33:20.302 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:33:20.302 "prchk_reftag": false, 00:33:20.302 "prchk_guard": false, 00:33:20.302 "hdgst": false, 00:33:20.302 "ddgst": false, 00:33:20.302 "dhchap_key": "key1", 00:33:20.302 "dhchap_ctrlr_key": "ckey2", 00:33:20.302 "method": "bdev_nvme_attach_controller", 00:33:20.302 "req_id": 1 00:33:20.302 } 00:33:20.302 Got JSON-RPC error response 00:33:20.302 response: 00:33:20.302 { 00:33:20.302 "code": -5, 00:33:20.302 "message": "Input/output error" 00:33:20.302 } 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:33:20.302 rmmod nvme_tcp 00:33:20.302 rmmod nvme_fabrics 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 1044125 ']' 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 1044125 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 1044125 ']' 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 1044125 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1044125 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1044125' 00:33:20.302 killing process with pid 1044125 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 1044125 00:33:20.302 21:39:54 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 1044125 00:33:20.561 21:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso 
']' 00:33:20.561 21:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:33:20.561 21:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:33:20.561 21:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:33:20.561 21:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:33:20.561 21:39:55 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:20.561 21:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:20.561 21:39:55 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:33:23.089 21:39:57 nvmf_tcp.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:23.655 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:23.655 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:23.655 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:23.913 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:23.913 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:23.913 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:23.913 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:23.913 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:23.913 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:33:23.913 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:33:23.913 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:33:23.913 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:33:23.913 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:33:23.913 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:33:23.913 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:33:23.913 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:33:24.848 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:33:24.848 21:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.k0d /tmp/spdk.key-null.ZKY /tmp/spdk.key-sha256.0Qn /tmp/spdk.key-sha384.ka5 /tmp/spdk.key-sha512.a12 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:33:24.848 21:39:59 nvmf_tcp.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:26.219 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:26.219 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:26.219 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:26.219 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:26.219 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:26.219 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:26.219 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:26.219 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:26.219 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:26.219 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:33:26.219 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:33:26.219 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:33:26.219 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:33:26.219 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:33:26.219 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:33:26.219 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:33:26.219 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:33:26.219 00:33:26.219 real 0m50.119s 00:33:26.219 user 0m47.379s 00:33:26.219 sys 0m5.787s 00:33:26.219 21:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:33:26.219 21:40:00 nvmf_tcp.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:33:26.219 ************************************ 00:33:26.219 END TEST nvmf_auth_host 00:33:26.219 ************************************ 00:33:26.219 21:40:00 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:33:26.219 21:40:00 nvmf_tcp -- nvmf/nvmf.sh@107 -- # [[ tcp == \t\c\p ]] 00:33:26.219 21:40:00 nvmf_tcp -- nvmf/nvmf.sh@108 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:26.219 21:40:00 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:33:26.219 21:40:00 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:26.219 21:40:00 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:26.219 ************************************ 00:33:26.219 START TEST nvmf_digest 00:33:26.219 ************************************ 00:33:26.219 21:40:00 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:33:26.476 * Looking for test storage... 
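For anyone replaying the negative-path DHCHAP checks that nvmf_auth_host just finished: each NOT rpc_cmd bdev_nvme_attach_controller call above is expected to fail with JSON-RPC code -5 (Input/output error), because the kernel target demands DH-HMAC-CHAP and the initiator either omits the key or presents the wrong one. A minimal standalone reproduction, assuming the same SPDK RPC socket and the kernel nvmet listener recorded above, would be:

    # Hedged reproduction sketch; socket path, address, and NQNs are copied
    # verbatim from the log. The attach must fail because no --dhchap-key is
    # supplied while the target enforces authentication.
    scripts/rpc.py -s /var/tmp/spdk.sock bdev_nvme_attach_controller \
        -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        && echo 'unexpected success' || echo 'failed as expected (rc -5)'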
00:33:26.476 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:26.476 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:33:26.477 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:33:26.477 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:33:26.477 21:40:01 
nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:26.477 21:40:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:33:26.477 21:40:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:26.477 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:33:26.477 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:33:26.477 21:40:01 nvmf_tcp.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:33:26.477 21:40:01 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@329 -- # [[ 
e810 == e810 ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:33:28.398 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:33:28.398 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:33:28.398 Found net devices under 0000:0a:00.0: cvl_0_0 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:33:28.398 Found net devices under 0000:0a:00.1: cvl_0_1 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:33:28.398 21:40:02 nvmf_tcp.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:33:28.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:28.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:33:28.398 00:33:28.398 --- 10.0.0.2 ping statistics --- 00:33:28.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.398 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:28.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:28.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.107 ms 00:33:28.398 00:33:28.398 --- 10.0.0.1 ping statistics --- 00:33:28.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:28.398 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:33:28.398 ************************************ 00:33:28.398 START TEST nvmf_digest_clean 00:33:28.398 ************************************ 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=1053573 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 1053573 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1053573 ']' 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:28.398 
21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:28.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:28.398 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.656 [2024-07-11 21:40:03.181885] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:28.656 [2024-07-11 21:40:03.181965] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:28.656 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.656 [2024-07-11 21:40:03.246317] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:28.656 [2024-07-11 21:40:03.328568] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:28.656 [2024-07-11 21:40:03.328638] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:28.656 [2024-07-11 21:40:03.328661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:28.656 [2024-07-11 21:40:03.328672] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:28.656 [2024-07-11 21:40:03.328682] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
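Behind the nvmfappstart/waitforlisten pair recorded above, the harness launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --wait-for-rpc and then polls the app's RPC socket until it answers. An illustrative simplification, not the harness's exact code (spdk_get_version is just a harmless probe RPC, and the retry count is arbitrary):

    # Sketch of nvmfappstart + waitforlisten, using the namespace, flags,
    # and socket path shown in the log; run from the spdk checkout.
    ip netns exec cvl_0_0_ns_spdk \
        ./build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc &
    nvmfpid=$!
    for _ in $(seq 1 100); do
        scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version \
            >/dev/null 2>&1 && break
        sleep 0.1
    done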
00:33:28.656 [2024-07-11 21:40:03.328706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:28.656 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.913 null0 00:33:28.913 [2024-07-11 21:40:03.520536] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:28.913 [2024-07-11 21:40:03.544777] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1053594 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1053594 /var/tmp/bperf.sock 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1053594 ']' 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:33:28.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:28.913 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:28.913 [2024-07-11 21:40:03.593809] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:28.913 [2024-07-11 21:40:03.593902] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1053594 ] 00:33:28.913 EAL: No free 2048 kB hugepages reported on node 1 00:33:28.913 [2024-07-11 21:40:03.657217] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:29.171 [2024-07-11 21:40:03.750900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:29.171 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:29.171 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:29.171 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:29.171 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:29.171 21:40:03 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:29.429 21:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.429 21:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:29.993 nvme0n1 00:33:29.993 21:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:29.993 21:40:04 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:29.993 Running I/O for 2 seconds... 
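Condensed, the run_bperf pass that just started amounts to three RPC-driven steps against the bdevperf socket; the ordering below is the point, and every flag is copied from the log (this first pass enables data digest via --ddgst):

    # Condensed view of run_bperf randread 4096 128 as recorded above.
    R='scripts/rpc.py -s /var/tmp/bperf.sock'
    $R framework_start_init        # release the --wait-for-rpc pause
    $R bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
    # Drive the 2-second workload configured on bdevperf's command line.
    examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests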
00:33:31.932 00:33:31.932 Latency(us) 00:33:31.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:31.932 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:33:31.932 nvme0n1 : 2.01 18844.65 73.61 0.00 0.00 6782.38 3519.53 15825.73 00:33:31.932 =================================================================================================================== 00:33:31.932 Total : 18844.65 73.61 0.00 0.00 6782.38 3519.53 15825.73 00:33:31.932 0 00:33:31.932 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:33:31.932 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:33:31.932 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:33:31.932 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:33:31.932 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:33:31.932 | select(.opcode=="crc32c") 00:33:31.932 | "\(.module_name) \(.executed)"' 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1053594 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1053594 ']' 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1053594 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1053594 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1053594' 00:33:32.191 killing process with pid 1053594 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1053594 00:33:32.191 Received shutdown signal, test time was about 2.000000 seconds 00:33:32.191 00:33:32.191 Latency(us) 00:33:32.191 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:32.191 =================================================================================================================== 00:33:32.191 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:33:32.191 21:40:06 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1053594 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:33:32.449 21:40:07 
nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1054003 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1054003 /var/tmp/bperf.sock 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1054003 ']' 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:32.449 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:32.449 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:33:32.449 [2024-07-11 21:40:07.191493] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:33:32.449 [2024-07-11 21:40:07.191585] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054003 ] 00:33:32.449 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:32.449 Zero copy mechanism will not be used. 
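Two quick cross-checks on the first pass. Throughput: 18844.65 IOPS x 4096 B / 2^20 = 73.61 MiB/s, which matches the MiB/s column in the latency table above. And the accel-stats probe that decided software-vs-DSA accounting can be run standalone against a live bperf instance; the jq filter below is verbatim from the log:

    # Stand-alone form of the crc32c accounting check; with no DSA in play
    # it should print something like: software <executed-count>
    scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats \
      | jq -rc '.operations[] | select(.opcode=="crc32c")
                | "\(.module_name) \(.executed)"'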
00:33:32.449 EAL: No free 2048 kB hugepages reported on node 1 00:33:32.708 [2024-07-11 21:40:07.249986] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:32.708 [2024-07-11 21:40:07.334564] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.708 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:32.708 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:33:32.708 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:33:32.708 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:33:32.708 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:33:33.273 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.273 21:40:07 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:33.531 nvme0n1 00:33:33.531 21:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:33:33.531 21:40:08 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:33.531 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:33.531 Zero copy mechanism will not be used. 00:33:33.531 Running I/O for 2 seconds... 
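On the zero-copy notice repeated in this pass: the sock layer here only attempts zero-copy sends for I/O at or below its 65536-byte threshold, and 131072 > 65536, so plain copying send paths are used, exactly as the message states. (The table that follows checks out the same way as the first: 4592.39 IOPS x 128 KiB = 574.05 MiB/s.) If one wanted to experiment with the threshold, recent SPDK exposes it through the sock options RPC; a hedged sketch, where the flag name is assumed from current rpc.py help and the call must land before the controller is attached, while the app is still paused in --wait-for-rpc:

    # Assumed knob, not exercised by this test run: raise the posix
    # zero-copy threshold so 128 KiB sends would qualify.
    scripts/rpc.py -s /var/tmp/bperf.sock sock_impl_set_options \
        -i posix --zerocopy-threshold 262144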
00:33:35.431
00:33:35.431 Latency(us)
00:33:35.431 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:35.431 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:35.431 nvme0n1 : 2.00 4592.39 574.05 0.00 0.00 3479.33 794.93 10922.67
00:33:35.431 ===================================================================================================================
00:33:35.431 Total : 4592.39 574.05 0.00 0.00 3479.33 794.93 10922.67
00:33:35.431 0
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:35.691 | select(.opcode=="crc32c")
00:33:35.691 | "\(.module_name) \(.executed)"'
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:35.691 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1054003
00:33:35.692 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1054003 ']'
00:33:35.692 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1054003
00:33:35.692 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:33:35.692 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:35.692 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1054003
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1054003'
00:33:36.013 killing process with pid 1054003
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1054003
00:33:36.013 Received shutdown signal, test time was about 2.000000 seconds
00:33:36.013
00:33:36.013 Latency(us)
00:33:36.013 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:36.013 ===================================================================================================================
00:33:36.013 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1054003
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1054403
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1054403 /var/tmp/bperf.sock
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1054403 ']'
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:36.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:36.013 21:40:10 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:33:36.271 [2024-07-11 21:40:10.757499] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:36.271 [2024-07-11 21:40:10.757566] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054403 ]
00:33:36.271 EAL: No free 2048 kB hugepages reported on node 1
00:33:36.271 [2024-07-11 21:40:10.819720] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:36.271 [2024-07-11 21:40:10.910978] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:36.271 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:36.271 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:33:36.271 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:33:36.271 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:33:36.271 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:33:36.836 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:36.836 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:37.093 nvme0n1
00:33:37.093 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:33:37.093 21:40:11 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:37.093 Running I/O for 2 seconds...
00:33:39.622
00:33:39.622 Latency(us)
00:33:39.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.622 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:39.622 nvme0n1 : 2.00 20437.74 79.83 0.00 0.00 6252.64 2888.44 10243.03
00:33:39.622 ===================================================================================================================
00:33:39.622 Total : 20437.74 79.83 0.00 0.00 6252.64 2888.44 10243.03
00:33:39.622 0
00:33:39.622 21:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:39.622 21:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:39.622 21:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:39.622 21:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:39.622 21:40:13 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:39.622 | select(.opcode=="crc32c")
00:33:39.622 | "\(.module_name) \(.executed)"'
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1054403
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1054403 ']'
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1054403
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1054403
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1054403'
00:33:39.622 killing process with pid 1054403
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1054403
00:33:39.622 Received shutdown signal, test time was about 2.000000 seconds
00:33:39.622
00:33:39.622 Latency(us)
00:33:39.622 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:39.622 ===================================================================================================================
00:33:39.622 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1054403
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa
00:33:39.622 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=1054894
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 1054894 /var/tmp/bperf.sock
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 1054894 ']'
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:39.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:39.623 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:33:39.623 [2024-07-11 21:40:14.332334] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:39.623 [2024-07-11 21:40:14.332427] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1054894 ]
00:33:39.623 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:39.623 Zero copy mechanism will not be used.
00:33:39.623 EAL: No free 2048 kB hugepages reported on node 1
00:33:39.623 [2024-07-11 21:40:14.392726] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:39.881 [2024-07-11 21:40:14.483568] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:39.881 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:39.881 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0
00:33:39.881 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false
00:33:39.881 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init
00:33:39.881 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init
00:33:40.139 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:40.139 21:40:14 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:40.706 nvme0n1
00:33:40.706 21:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests
00:33:40.706 21:40:15 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:40.706 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:40.706 Zero copy mechanism will not be used.
00:33:40.706 Running I/O for 2 seconds...
00:33:42.604
00:33:42.605 Latency(us)
00:33:42.605 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:42.605 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:42.605 nvme0n1 : 2.00 4436.37 554.55 0.00 0.00 3597.55 2402.99 9417.77
00:33:42.605 ===================================================================================================================
00:33:42.605 Total : 4436.37 554.55 0.00 0.00 3597.55 2402.99 9417.77
00:33:42.605 0
00:33:42.605 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed
00:33:42.605 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats
00:33:42.605 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats
00:33:42.605 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats
00:33:42.605 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[]
00:33:42.605 | select(.opcode=="crc32c")
00:33:42.605 | "\(.module_name) \(.executed)"'
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 ))
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 1054894
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1054894 ']'
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1054894
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:42.863 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1054894
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1054894'
00:33:43.121 killing process with pid 1054894
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1054894
00:33:43.121 Received shutdown signal, test time was about 2.000000 seconds
00:33:43.121
00:33:43.121 Latency(us)
00:33:43.121 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:43.121 ===================================================================================================================
00:33:43.121 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1054894
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 1053573
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 1053573 ']'
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 1053573
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1053573
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1053573'
00:33:43.121 killing process with pid 1053573
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 1053573
00:33:43.121 21:40:17 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 1053573
00:33:43.380
00:33:43.380 real 0m14.961s
00:33:43.380 user 0m29.856s
00:33:43.380 sys 0m3.956s
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x
00:33:43.380 ************************************
00:33:43.380 END TEST nvmf_digest_clean
00:33:43.380 ************************************
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']'
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:33:43.380 ************************************
00:33:43.380 START TEST nvmf_digest_error
00:33:43.380 ************************************
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=1055371
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 1055371
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1055371 ']'
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:33:43.380 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:43.380 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.638 [2024-07-11 21:40:18.190014] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:43.638 [2024-07-11 21:40:18.190100] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:33:43.638 EAL: No free 2048 kB hugepages reported on node 1
00:33:43.638 [2024-07-11 21:40:18.263301] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:43.638 [2024-07-11 21:40:18.353059] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:33:43.638 [2024-07-11 21:40:18.353121] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:33:43.638 [2024-07-11 21:40:18.353146] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:33:43.638 [2024-07-11 21:40:18.353160] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:33:43.638 [2024-07-11 21:40:18.353172] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:33:43.638 [2024-07-11 21:40:18.353206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.897 [2024-07-11 21:40:18.461881] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:43.897 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.897 null0
00:33:43.897 [2024-07-11 21:40:18.580269] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:33:43.897 [2024-07-11 21:40:18.604489] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1055392
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1055392 /var/tmp/bperf.sock
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1055392 ']'
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:43.898 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:43.898 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:43.898 [2024-07-11 21:40:18.654372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:43.898 [2024-07-11 21:40:18.654447] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055392 ]
00:33:44.156 EAL: No free 2048 kB hugepages reported on node 1
00:33:44.156 [2024-07-11 21:40:18.722062] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:44.156 [2024-07-11 21:40:18.816835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:44.414 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:44.414 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:44.414 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:44.414 21:40:18 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:44.672 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:44.672 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:44.672 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:44.672 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:44.672 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:44.672 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:44.929 nvme0n1
00:33:44.929 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:44.929 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:44.929 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:44.929 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:44.929 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:44.929 21:40:19 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:45.187 Running I/O for 2 seconds...
00:33:45.187 [2024-07-11 21:40:19.762431] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.762477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.762523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.778562] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.778599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23667 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.778628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.794266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.794299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.794323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.806784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.806845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:2206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.806863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.820094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.820129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2934 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.820154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.834646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.834681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:6556 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.834702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.848916] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.848948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:12424 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.848965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.861415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.861445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:2105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.861468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.876241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.876276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.876295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.893093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.893148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:25303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.893168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.187 [2024-07-11 21:40:19.908937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.187 [2024-07-11 21:40:19.908969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.187 [2024-07-11 21:40:19.908987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.188 [2024-07-11 21:40:19.920868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.188 [2024-07-11 21:40:19.920899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:2078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.188 [2024-07-11 21:40:19.920916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.188 [2024-07-11 21:40:19.935359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.188 [2024-07-11 21:40:19.935394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:2017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.188 [2024-07-11 21:40:19.935426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.188 [2024-07-11 21:40:19.948988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.188 [2024-07-11 21:40:19.949035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.188 [2024-07-11 21:40:19.949053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.446 [2024-07-11 21:40:19.961319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.446 [2024-07-11 21:40:19.961354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:4172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.446 [2024-07-11 21:40:19.961377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.446 [2024-07-11 21:40:19.976149] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:19.976183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:13266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:19.976202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:19.994274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:19.994308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:22059 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:19.994327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.007885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.007918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:19236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.007945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.019849] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.019881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:5883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.019906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.035437] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.035485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:9546 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.035511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.046946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.046980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:5428 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.047006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.063084] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.063131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:11170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.063154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.075604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.075639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.075657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.089904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.089949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:15084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.089966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.103241] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.103274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:13039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.103293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.116587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.116621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:4869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.116639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.130537] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.130571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:19470 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.130590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.142428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.142460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.142480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.155458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.155487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.155513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.170708] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.170741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:926 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.170780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.181988] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.182016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.182033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.197377] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.197407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:2677 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.197444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.447 [2024-07-11 21:40:20.213789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.447 [2024-07-11 21:40:20.213836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:13803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.447 [2024-07-11 21:40:20.213856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.705 [2024-07-11 21:40:20.227259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.705 [2024-07-11 21:40:20.227294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:16168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.705 [2024-07-11 21:40:20.227313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.705 [2024-07-11 21:40:20.239216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.705 [2024-07-11 21:40:20.239250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:1159 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.705 [2024-07-11 21:40:20.239270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.705 [2024-07-11 21:40:20.252523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.705 [2024-07-11 21:40:20.252557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.705 [2024-07-11 21:40:20.252576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.705 [2024-07-11 21:40:20.267485] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.705 [2024-07-11 21:40:20.267520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:20672 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.705 [2024-07-11 21:40:20.267539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.705 [2024-07-11 21:40:20.279303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.705 [2024-07-11 21:40:20.279337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:6354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.705 [2024-07-11 21:40:20.279356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.705 [2024-07-11 21:40:20.296119] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.705 [2024-07-11 21:40:20.296161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:6519 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.705 [2024-07-11 21:40:20.296182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.309776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.309823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.309841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.324118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.324153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13676 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.324172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.342173] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.342207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:25400 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.342226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.359571] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.359605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.359623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.370983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.371026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.371045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.386661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.386696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:9105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.386714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.401608] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.401642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.401661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.417555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.417589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.417608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.434259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.434294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:13695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.434313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.446677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.446710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:6848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.446731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.461219] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.461254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.461273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.706 [2024-07-11 21:40:20.474577] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.706 [2024-07-11 21:40:20.474612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:9942 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.706 [2024-07-11 21:40:20.474632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.964 [2024-07-11 21:40:20.487354] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.964 [2024-07-11 21:40:20.487388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:12321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.964 [2024-07-11 21:40:20.487408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.964 [2024-07-11 21:40:20.502543] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.964 [2024-07-11 21:40:20.502577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:5242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.964 [2024-07-11 21:40:20.502597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.964 [2024-07-11 21:40:20.515190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.964 [2024-07-11 21:40:20.515225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.964 [2024-07-11 21:40:20.515244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.964 [2024-07-11 21:40:20.529666] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.964 [2024-07-11 21:40:20.529701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11973 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.964 [2024-07-11 21:40:20.529720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.964 [2024-07-11 21:40:20.547320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.964 [2024-07-11 21:40:20.547356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:18905 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.964 [2024-07-11 21:40:20.547382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.964 [2024-07-11 21:40:20.559371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.964 [2024-07-11 21:40:20.559406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:18992 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.964 [2024-07-11 21:40:20.559424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.964 [2024-07-11 21:40:20.574954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.964 [2024-07-11 21:40:20.574985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18638 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.964 [2024-07-11 21:40:20.575002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.964 [2024-07-11 21:40:20.589610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.964 [2024-07-11 21:40:20.589645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:11716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.964 [2024-07-11 21:40:20.589664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.601240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.601273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:14078 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.601292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.617157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.617191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:18246 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.617211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.635502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.635536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:1646 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.635555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.651477] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.651513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:3692 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.651532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.664489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.664524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:2130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.664543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.680773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.680808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:15115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.680841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.696905] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.696936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2221 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.696952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.710293] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.710327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:10823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.710346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:45.965 [2024-07-11 21:40:20.722116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:45.965 [2024-07-11 21:40:20.722151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:269 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:45.965 [2024-07-11 21:40:20.722170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:46.224 [2024-07-11 21:40:20.738487] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0)
00:33:46.224 [2024-07-11 21:40:20.738522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1
lba:9922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.738541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.752590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.752621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:21046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.752653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.764418] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.764453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:24525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.764471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.780443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.780479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:21716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.780499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.795502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.795537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10139 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.795563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.807395] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.807429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:17704 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.807448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.822170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.822204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:11369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.822224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.834319] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.834353] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:11709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.834372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.847762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.847821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:14520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.847838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.861037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.861086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:12560 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.861105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.876871] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.876920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4725 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.876936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.888852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.888889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:3660 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.888905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.904574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.904604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:16684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.904635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.920381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.920421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:3309 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.920441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.932400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 
00:33:46.224 [2024-07-11 21:40:20.932434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:21305 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.932453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.948767] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.948797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:13176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.948815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.964345] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.964380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:7670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.964399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.975058] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.975106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.975127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.224 [2024-07-11 21:40:20.991296] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.224 [2024-07-11 21:40:20.991332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:10479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.224 [2024-07-11 21:40:20.991351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.003287] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.003321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:7846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.003340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.018970] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.018999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:7179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.019014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.034138] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.034168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:24998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.034185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.048826] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.048857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.048873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.060605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.060640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.060659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.075292] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.075328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:7753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.075347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.089291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.089325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:16302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.089344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.102371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.102405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:4296 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.102425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.114956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.114988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.115005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.129764] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.129813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20355 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.129831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.143068] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.143103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:18432 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.483 [2024-07-11 21:40:21.143123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.483 [2024-07-11 21:40:21.155262] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.483 [2024-07-11 21:40:21.155297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:15885 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.484 [2024-07-11 21:40:21.155322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.484 [2024-07-11 21:40:21.170689] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.484 [2024-07-11 21:40:21.170724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:25282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.484 [2024-07-11 21:40:21.170743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.484 [2024-07-11 21:40:21.183975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.484 [2024-07-11 21:40:21.184006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:16506 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.484 [2024-07-11 21:40:21.184023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.484 [2024-07-11 21:40:21.198335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.484 [2024-07-11 21:40:21.198370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14538 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.484 [2024-07-11 21:40:21.198389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.484 [2024-07-11 21:40:21.210153] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.484 [2024-07-11 21:40:21.210187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11914 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.484 [2024-07-11 21:40:21.210206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:46.484 [2024-07-11 21:40:21.224360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.484 [2024-07-11 21:40:21.224406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:14000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.484 [2024-07-11 21:40:21.224423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.484 [2024-07-11 21:40:21.237605] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.484 [2024-07-11 21:40:21.237639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24031 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.484 [2024-07-11 21:40:21.237658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.484 [2024-07-11 21:40:21.250405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.484 [2024-07-11 21:40:21.250438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:23464 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.484 [2024-07-11 21:40:21.250457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.264687] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.264722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:20266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.264740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.278461] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.278495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.278514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.291001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.291030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:4695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.291046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.306931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.306976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:11176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.306994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.323872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.323915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:7620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.323932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.337676] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.337709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.337729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.348578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.348609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.348626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.364789] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.364836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:25416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.364853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.378116] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.378151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:17186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.378170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.394896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.394940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:2637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.394962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.410879] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.410924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10411 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.410941] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.422938] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.422967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:1653 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.422982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.439080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.439115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11520 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.439134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.454947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.454976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:23634 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.454991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.466797] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.466845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:12155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.466862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.481524] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.481560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.481579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:46.743 [2024-07-11 21:40:21.499626] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:46.743 [2024-07-11 21:40:21.499661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6149 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:46.743 [2024-07-11 21:40:21.499681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.001 [2024-07-11 21:40:21.515208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.001 [2024-07-11 21:40:21.515239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1689 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:47.001 [2024-07-11 21:40:21.515256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.001 [2024-07-11 21:40:21.530661] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.530698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.530716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.543214] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.543242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.543258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.557854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.557893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.557911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.572247] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.572279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20713 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.572296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.583794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.583824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.583862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.598575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.598605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5155 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.598622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.613535] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.613566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:1755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.613584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.625254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.625282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.625298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.641326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.641355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:12521 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.641371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.653298] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.653330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:13196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.653347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.666877] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.666907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:14510 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.666924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.678048] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.678079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:15083 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.678096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.695257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.695288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.695305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.711087] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.711144] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:21941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.711161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.725460] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.725492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:9137 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.725509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.737390] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.737419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:15959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.737435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:47.002 [2024-07-11 21:40:21.750985] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x24493c0) 00:33:47.002 [2024-07-11 21:40:21.751016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:47.002 [2024-07-11 21:40:21.751033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:33:47.260
00:33:47.260 Latency(us)
00:33:47.260 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:47.260 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:33:47.260 nvme0n1 : 2.05 17607.05 68.78 0.00 0.00 7116.02 3398.16 47768.46
00:33:47.260 ===================================================================================================================
00:33:47.260 Total : 17607.05 68.78 0.00 0.00 7116.02 3398.16 47768.46
00:33:47.260 0
00:33:47.260 21:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:33:47.260 21:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:33:47.260 21:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:33:47.260 21:40:21 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:33:47.260 | .driver_specific 00:33:47.260 | .nvme_error 00:33:47.260 | .status_code 00:33:47.260 | .command_transient_transport_error' 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 141 > 0 )) 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1055392 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1055392 ']' 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1055392 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@953 -- # uname 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1055392 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1055392' 00:33:47.520 killing process with pid 1055392 00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1055392 00:33:47.520 Received shutdown signal, test time was about 2.000000 seconds
00:33:47.520
00:33:47.520 Latency(us)
00:33:47.520 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:47.520 ===================================================================================================================
00:33:47.520 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:47.520 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1055392 00:33:47.778 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:33:47.778 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:33:47.778 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:33:47.778 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:33:47.778 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:33:47.778 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1055916 00:33:47.778 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:33:47.778 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1055916 /var/tmp/bperf.sock 00:33:47.779 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1055916 ']' 00:33:47.779 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:33:47.779 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:33:47.779 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:33:47.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:33:47.779 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:33:47.779 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:47.779 [2024-07-11 21:40:22.355104] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
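The get_transient_errcount trace above (host/digest.sh@27 and @28) reads the per-controller error counters that --nvme-error-stat accumulates and asserts the count is positive ((( 141 > 0 ))): 141 injected digest failures over the roughly two-second run, all reported as transient transport errors rather than hard I/O failures. A minimal standalone sketch of the same query, assuming a bdevperf instance is still listening on /var/tmp/bperf.sock:

    # Fetch I/O statistics for nvme0n1 and extract the transient transport error
    # counter that bdev_nvme_set_options --nvme-error-stat makes bdev_get_iostat report.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py \
        -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error'
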
00:33:47.779 [2024-07-11 21:40:22.355201] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1055916 ] 00:33:47.779 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:47.779 Zero copy mechanism will not be used. 00:33:47.779 EAL: No free 2048 kB hugepages reported on node 1 00:33:47.779 [2024-07-11 21:40:22.415014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:47.779 [2024-07-11 21:40:22.499899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:33:48.037 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:33:48.037 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:33:48.037 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:48.037 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:33:48.294 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:33:48.294 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.294 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.294 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.294 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.294 21:40:22 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:33:48.551 nvme0n1 00:33:48.551 21:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:33:48.551 21:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:33:48.551 21:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:33:48.809 21:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:33:48.809 21:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:33:48.809 21:40:23 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:33:48.809 I/O size of 131072 is greater than zero copy threshold (65536). 00:33:48.809 Zero copy mechanism will not be used. 00:33:48.809 Running I/O for 2 seconds... 
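The trace above arms the second run before perform_tests starts the two-second workload: error statistics and unlimited bdev retries are enabled, crc32c injection is disabled so the attach itself succeeds, the controller is attached with --ddgst so every received READ payload is digest-checked, and injection is then switched to corrupt. A condensed sketch of that RPC sequence, with one assumption flagged: the trace issues accel_error_inject_error through rpc_cmd, whose target socket is not visible here, so the default application socket is assumed.

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    # Count NVMe error completions per status code and retry failed I/O indefinitely.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Keep crc32c injection off while attaching (rpc_cmd in the trace; default socket assumed).
    $RPC accel_error_inject_error -o crc32c -t disable

    # Attach with data digest enabled; received payloads are now CRC32C-verified.
    $RPC -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
        -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Switch injection to corrupt (arguments exactly as in the trace) and run the workload.
    $RPC accel_error_inject_error -o crc32c -t corrupt -i 32
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
        -s /var/tmp/bperf.sock perform_tests
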
00:33:48.809 [2024-07-11 21:40:23.452469] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10)
00:33:48.809 [2024-07-11 21:40:23.452533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.809 [2024-07-11 21:40:23.452557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:48.810 [2024-07-11 21:40:23.460459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10)
00:33:48.810 [2024-07-11 21:40:23.460506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:48.810 [2024-07-11 21:40:23.460526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
[... the same three-line group repeats continuously for every READ on tqpair=(0x157bf10) from 21:40:23.452 through 21:40:24.448: a data digest error at nvme_tcp.c:1459, the failed READ (qid:1 cid:15 nsid:1, len:32, lba varying per command), and a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion with sqhd cycling 0001/0021/0041/0061; only the timestamp, lba, and sqhd fields change ...]
00:33:49.848 [2024-07-11 21:40:24.448090] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10)
00:33:49.848 [2024-07-11 21:40:24.448137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1
cid:15 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.848 [2024-07-11 21:40:24.448156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.848 [2024-07-11 21:40:24.454874] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.848 [2024-07-11 21:40:24.454903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.848 [2024-07-11 21:40:24.454919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.848 [2024-07-11 21:40:24.461542] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.848 [2024-07-11 21:40:24.461575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.848 [2024-07-11 21:40:24.461592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.848 [2024-07-11 21:40:24.468208] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.848 [2024-07-11 21:40:24.468243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.848 [2024-07-11 21:40:24.468268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.848 [2024-07-11 21:40:24.475026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.848 [2024-07-11 21:40:24.475069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.848 [2024-07-11 21:40:24.475085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.481722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.481764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.481800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.488641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.488675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.488694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.495443] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.495476] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.495496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.502240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.502273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.502291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.509021] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.509064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.509080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.515852] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.515896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.515912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.522486] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.522519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.522538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.529185] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.529217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.529236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.536204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.536239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.536258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.542941] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 
00:33:49.849 [2024-07-11 21:40:24.542970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.542987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.549698] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.549730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.549749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.556421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.556454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.556473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.563172] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.563205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.563223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.569963] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.569991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.570007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.576741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.576780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.576815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.583450] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.583482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.583506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.590190] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.590222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.590241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.596983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.597011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.597045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.603740] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.603780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.603814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.610509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.610541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.610559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:49.849 [2024-07-11 21:40:24.617216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:49.849 [2024-07-11 21:40:24.617248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:49.849 [2024-07-11 21:40:24.617267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.107 [2024-07-11 21:40:24.624073] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.107 [2024-07-11 21:40:24.624117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.107 [2024-07-11 21:40:24.624136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.107 [2024-07-11 21:40:24.630870] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.107 [2024-07-11 21:40:24.630898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.107 [2024-07-11 21:40:24.630913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:33:50.107 [2024-07-11 21:40:24.637635] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.107 [2024-07-11 21:40:24.637667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.107 [2024-07-11 21:40:24.637686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.107 [2024-07-11 21:40:24.644459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.107 [2024-07-11 21:40:24.644497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.107 [2024-07-11 21:40:24.644516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.107 [2024-07-11 21:40:24.651307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.107 [2024-07-11 21:40:24.651339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.107 [2024-07-11 21:40:24.651357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.107 [2024-07-11 21:40:24.658118] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.107 [2024-07-11 21:40:24.658159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.107 [2024-07-11 21:40:24.658178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.107 [2024-07-11 21:40:24.664836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.664879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.664895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.671630] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.671662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.671680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.678488] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.678520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.678539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.685107] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.685140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.685159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.692053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.692081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.692096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.698950] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.698979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.698996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.705657] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.705690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.705708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.712541] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.712574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.712593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.719415] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.719449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.719468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.726157] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.726190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.726208] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.732949] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.732978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.732995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.739690] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.739723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.739742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.746440] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.746473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.746491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.753344] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.753376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.753394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.760206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.760239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.760263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.766981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.767010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.767042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.773855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.773884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.773900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.780590] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.780623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.780641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.787387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.787420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.787438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.794133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.794165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.794184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.800918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.800948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.800964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.807701] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.807733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.807759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.814442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.814474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.814492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.821179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.821211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:15 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.821229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.827935] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.827965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.827982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.834707] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.834738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.834763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.841388] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.841421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.841439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.848108] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.848140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.848158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.855326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.108 [2024-07-11 21:40:24.855360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.108 [2024-07-11 21:40:24.855380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.108 [2024-07-11 21:40:24.863992] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.109 [2024-07-11 21:40:24.864023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.109 [2024-07-11 21:40:24.864057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.109 [2024-07-11 21:40:24.872673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.109 [2024-07-11 21:40:24.872706] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.109 [2024-07-11 21:40:24.872726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.881780] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.881810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.881850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.890954] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.890986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.891003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.900102] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.900137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.900156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.909039] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.909090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.909106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.918133] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.918167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.918196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.927009] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.927055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.927072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.936174] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.936208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.936228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.944975] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.945006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.945043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.953684] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.953718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.953738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.962720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.962773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.962794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.971409] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.971444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.971463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.980331] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.980366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.980386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.990572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.990608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.990627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:24.997603] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:24.997638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:24.997657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.005239] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.005273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.005293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.012980] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.013010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.013028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.020612] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.020647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.020666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.028350] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.028385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.028403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.036008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.036038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.036057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.043658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.043693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.043711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.051041] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.051096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.051128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.058392] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.058427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.058446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.065682] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.065717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.065736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.072796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.072826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.072843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.080403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.080438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.080457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:50.367 [2024-07-11 21:40:25.088194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.367 [2024-07-11 21:40:25.088229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.367 [2024-07-11 21:40:25.088249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:50.368 [2024-07-11 21:40:25.096059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10) 00:33:50.368 [2024-07-11 21:40:25.096111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:50.368 [2024-07-11 21:40:25.096129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:50.368 [2024-07-11 21:40:25.103813] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x157bf10)
00:33:50.368 [2024-07-11 21:40:25.103844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:50.368 [2024-07-11 21:40:25.103861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
[... the same three-line record repeats every 7-8 ms from 21:40:25.111046 through 21:40:25.446290 (READ, qid:1, cid 0-6 and 15, varying lba), each injected crc32c error surfacing as a data digest error on tqpair=(0x157bf10) followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the individual per-command records are elided here ...]
00:33:50.886
00:33:50.886 Latency(us)
00:33:50.886 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:50.886 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:33:50.886 nvme0n1 : 2.00 4349.58 543.70 0.00 0.00 3674.09 904.15 9709.04
00:33:50.886 ===================================================================================================================
00:33:50.886 Total : 4349.58 543.70 0.00 0.00 3674.09 904.15 9709.04
00:33:50.886 0
00:33:50.886 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:50.886 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:50.886 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:50.886 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:50.886 | .driver_specific
00:33:50.886 | .nvme_error
00:33:50.886 | .status_code
00:33:50.886 | .command_transient_transport_error'
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 280 > 0 ))
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1055916
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1055916 ']'
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1055916
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1055916
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1055916'
killing process with pid 1055916
00:33:51.144 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1055916
Received shutdown signal, test time was about 2.000000 seconds
00:33:51.144
00:33:51.144 Latency(us)
00:33:51.144 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:51.144 ===================================================================================================================
00:33:51.144 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:51.145 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1055916
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1056324
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1056324 /var/tmp/bperf.sock
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1056324 ']'
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
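The get_transient_errcount helper traced above (host/digest.sh@27-28) boils down to the following shell sketch. It is reconstructed from the xtrace output, not copied from host/digest.sh, so the real function may differ in detail:

# Sketch reconstructed from the xtrace above; bperf_rpc wraps rpc.py with the
# bdevperf RPC socket, and --nvme-error-stat (set via bdev_nvme_set_options)
# is what makes the bdev layer count NVMe status codes per bdev.
get_transient_errcount() {
	local bdev=$1
	/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
		bdev_get_iostat -b "$bdev" \
		| jq -r '.bdevs[0]
			| .driver_specific
			| .nvme_error
			| .status_code
			| .command_transient_transport_error'
}

The randread pass above counted 280 such completions, which satisfies the (( 280 > 0 )) assertion at host/digest.sh@71 before bdevperf is killed and relaunched for the randwrite pass.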
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:51.403 21:40:25 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:51.403 [2024-07-11 21:40:26.036587] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:51.403 [2024-07-11 21:40:26.036666] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056324 ]
00:33:51.403 EAL: No free 2048 kB hugepages reported on node 1
00:33:51.403 [2024-07-11 21:40:26.099341] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:51.690 [2024-07-11 21:40:26.194045] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:51.690 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:51.690 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:51.690 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:51.690 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:51.948 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:51.948 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:51.948 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:51.948 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:51.948 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:51.948 21:40:26 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:52.513 nvme0n1
00:33:52.514 21:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256
00:33:52.514 21:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:52.514 21:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:52.514 21:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:52.514 21:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:52.514 21:40:27 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:52.514 Running I/O for 2 seconds...
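Stripped of the xtrace noise, the randwrite setup just traced reduces to the sequence below. This is a sketch assembled from the trace, not a verbatim excerpt of host/digest.sh; note that rpc_cmd carries no -s flag in the trace, so the error injection presumably goes to the nvmf target application's default RPC socket rather than to bdevperf:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# bdevperf side: count NVMe status codes per bdev and retry failed I/O forever
$rpc -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

# keep crc32c error injection disabled while the controller attaches
$rpc accel_error_inject_error -o crc32c -t disable

# attach over TCP with data digest enabled (--ddgst); prints the new bdev, nvme0n1
$rpc -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 \
	-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# corrupt crc32c results (interval 256) so digest checks fail during the run
$rpc accel_error_inject_error -o crc32c -t corrupt -i 256

# drive the 2-second randwrite workload configured on the bdevperf command line
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
	-s /var/tmp/bperf.sock perform_tests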
00:33:52.514 [2024-07-11 21:40:27.236188] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90
00:33:52.514 [2024-07-11 21:40:27.236493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11975 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:52.514 [2024-07-11 21:40:27.236549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0
[... the same three-line record repeats roughly every 13-15 ms from 21:40:27.249 through 21:40:28.315 (WRITE, qid:1, cid 16 and 121-126, varying lba), each corrupted crc32c result surfacing as a Data digest error on tqpair=(0x1cb4c40) followed by a COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion; the individual per-command records are elided here ...]
00:33:53.805 [2024-07-11 21:40:28.330269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest
error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.330518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.330551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.345000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.345256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:11824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.345288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.358989] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.359345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:1214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.359377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.373175] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.373441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:4009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.373472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.387471] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.387722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:9404 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.387764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.401897] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.402157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:11316 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.402189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.416485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.416716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.416751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.430950] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.431254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:16553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.431286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.445718] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.445993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:19976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.446022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.460349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.460643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:13055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.460674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.475021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.475286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.475316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.489513] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.489779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.489825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.504164] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.504416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:13064 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.504446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.518852] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.519047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.519075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 
00:33:53.805 [2024-07-11 21:40:28.533566] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.533849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:12567 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.533879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.548086] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.548340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:14421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.548372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:53.805 [2024-07-11 21:40:28.562710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:53.805 [2024-07-11 21:40:28.563005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:53.805 [2024-07-11 21:40:28.563049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.061 [2024-07-11 21:40:28.577362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.061 [2024-07-11 21:40:28.577591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.061 [2024-07-11 21:40:28.577619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.061 [2024-07-11 21:40:28.591973] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.061 [2024-07-11 21:40:28.592280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:10706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.061 [2024-07-11 21:40:28.592312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.061 [2024-07-11 21:40:28.606670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.061 [2024-07-11 21:40:28.606946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:10968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.061 [2024-07-11 21:40:28.606978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.061 [2024-07-11 21:40:28.621215] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.061 [2024-07-11 21:40:28.621470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.061 [2024-07-11 21:40:28.621501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.061 [2024-07-11 21:40:28.635970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.061 [2024-07-11 21:40:28.636239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.061 [2024-07-11 21:40:28.636271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.061 [2024-07-11 21:40:28.650775] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.651067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:25584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.651098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.665472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.665700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:24837 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.665731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.680210] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.680388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:3520 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.680433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.694514] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.694741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:3169 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.694799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.709073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.709327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:15930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.709358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.723549] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.723761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.723792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.738201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.738546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:7813 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.738577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.752701] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.752920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.752948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.767257] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.767492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20261 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.767523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.781891] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.782173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:10519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.782204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.796500] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.796762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.796807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.810763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.810969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:24618 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.810997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.062 [2024-07-11 21:40:28.824963] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.062 [2024-07-11 21:40:28.825218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:24282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.062 [2024-07-11 21:40:28.825249] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.839681] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.839954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20563 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.839981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.854176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.854437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.854468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.868577] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.868814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:14949 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.868842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.882964] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.883239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:21809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.883271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.897475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.897775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:20965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.897806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.912046] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.912384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.912416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.926332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.926581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:2689 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 
[2024-07-11 21:40:28.926612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.940421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.940697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:1968 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.940726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.954615] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.954846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:17522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.954877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.968710] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.968946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.968977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.982928] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.983145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.983171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:28.997386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:28.997645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:11152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:28.997672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:29.012029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:29.012353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:20429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:29.012380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:29.026597] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:29.026876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:17153 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:29.026904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:29.041311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:29.041532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:21118 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:29.041559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:29.055748] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:29.056023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:2999 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:29.056059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:29.070144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:29.070409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:25336 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:29.070441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.318 [2024-07-11 21:40:29.084523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.318 [2024-07-11 21:40:29.084764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:23294 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.318 [2024-07-11 21:40:29.084791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.099238] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.099492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:6119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.099533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.113784] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.114041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:6846 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.114068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.128311] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.128614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 
lba:4648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.128658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.142534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.142778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:25449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.142806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.156905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.157110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:9792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.157137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.171477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.171780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.171823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.185824] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.186094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17600 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.186122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.200386] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.200629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.200655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.214981] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.215228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2500 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:33:54.574 [2024-07-11 21:40:29.215255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:33:54.574 [2024-07-11 21:40:29.229671] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4c40) with pdu=0x2000190fef90 00:33:54.574 [2024-07-11 21:40:29.229919] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13880 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:33:54.574 [2024-07-11 21:40:29.229947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:33:54.574
00:33:54.574 Latency(us)
00:33:54.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:54.574 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096)
00:33:54.574 nvme0n1 : 2.01 17687.17 69.09 0.00 0.00 7219.17 3252.53 14951.92
00:33:54.574 ===================================================================================================================
00:33:54.574 Total : 17687.17 69.09 0.00 0.00 7219.17 3252.53 14951.92
00:33:54.575 0
00:33:54.575 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:54.575 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:54.575 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:54.575 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:54.575 | .driver_specific
00:33:54.575 | .nvme_error
00:33:54.575 | .status_code
00:33:54.575 | .command_transient_transport_error'
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 139 > 0 ))
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1056324
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1056324 ']'
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1056324
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1056324
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1056324'
00:33:54.831 killing process with pid 1056324
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1056324
00:33:54.831 Received shutdown signal, test time was about 2.000000 seconds
00:33:54.831
00:33:54.831 Latency(us)
00:33:54.831 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:54.831 ===================================================================================================================
00:33:54.831 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:54.831 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1056324
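The readback traced just above (get_transient_errcount) is the actual pass/fail check: pull per-bdev statistics over the bperf RPC socket and require a nonzero transient-transport-error count. A condensed sketch of that step, using the same socket, bdev name, and jq path as the trace; the $rpc shorthand and the single-line jq form are editorial:

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  # pull per-bdev stats, including the NVMe error counters kept by --nvme-error-stat
  errcount=$($rpc bdev_get_iostat -b nvme0n1 |
      jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  # this pass counted 139 digest-failed WRITEs; any nonzero count passes
  (( errcount > 0 ))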
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=1056735
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 1056735 /var/tmp/bperf.sock
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 1056735 ']'
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:33:55.088 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:33:55.088 21:40:29 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:55.088 [2024-07-11 21:40:29.789388] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:33:55.088 [2024-07-11 21:40:29.789467] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1056735 ]
00:33:55.088 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:55.088 Zero copy mechanism will not be used.
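The second error pass (run_bperf_err randwrite 131072 16, traced above) starts its own bdevperf against a private RPC socket; -z parks the app idle until perform_tests arrives. A minimal sketch of that launch-and-wait step, assuming the repo layout from this job; the polling loop is an editorial stand-in for the harness's waitforlisten helper, not its exact code, and rpc_get_methods is just a cheap standard RPC to probe with:

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bperf.sock
  # -m 2: core mask; -o 131072: 128 KiB I/Os; -q 16: queue depth;
  # -z: stay idle until perform_tests is sent over the RPC socket
  "$spdk/build/examples/bdevperf" -m 2 -r "$sock" -w randwrite -o 131072 -t 2 -q 16 -z &
  bperfpid=$!
  # poll the UNIX-domain socket until it answers, as waitforlisten does
  until "$spdk/scripts/rpc.py" -s "$sock" rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
  done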
00:33:55.088 EAL: No free 2048 kB hugepages reported on node 1
00:33:55.088 [2024-07-11 21:40:29.851158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:33:55.344 [2024-07-11 21:40:29.938572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:33:55.344 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:33:55.344 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0
00:33:55.344 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:55.344 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
00:33:55.600 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable
00:33:55.600 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:55.600 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:55.600 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:55.600 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:55.600 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
00:33:56.163 nvme0n1
00:33:56.163 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32
00:33:56.163 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable
00:33:56.163 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:56.163 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:33:56.163 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests
00:33:56.163 21:40:30 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:33:56.420 I/O size of 131072 is greater than zero copy threshold (65536).
00:33:56.420 Zero copy mechanism will not be used.
00:33:56.420 Running I/O for 2 seconds...
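The setup traced above re-arms the fault before the 2-second run: attach with data digest while corruption is off, then tell the accel crc32c engine to corrupt its next 32 results so the target sees bad data digests on subsequent WRITEs. A sketch of that sequence with the same RPCs as the trace; the $bperf_rpc/$target_rpc shorthands are editorial, and routing rpc_cmd to the default /var/tmp/spdk.sock is an assumption (the trace does not show which socket rpc_cmd uses):

  spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  bperf_rpc="$spdk/scripts/rpc.py -s /var/tmp/bperf.sock"
  target_rpc="$spdk/scripts/rpc.py"   # assumed default socket /var/tmp/spdk.sock
  # count NVMe errors per status code and retry failed I/O indefinitely
  $bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # attach with data digest enabled (--ddgst) while crc32c corruption is off,
  # so the connect itself comes up clean
  $target_rpc accel_error_inject_error -o crc32c -t disable
  $bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # corrupt the next 32 crc32c results; each surfaces as a bad data digest and
  # completes as COMMAND TRANSIENT TRANSPORT ERROR (00/22), as in the records below
  $target_rpc accel_error_inject_error -o crc32c -t corrupt -i 32
  # start the queued randwrite job in the idle (-z) bdevperf
  "$spdk/examples/bdev/bdevperf/bdevperf.py" -s /var/tmp/bperf.sock perform_tests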
00:33:56.420 [2024-07-11 21:40:30.947038] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:30.947442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:30.947484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:30.954538] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:30.954887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:30.954918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:30.961660] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:30.962024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:30.962083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:30.969479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:30.969879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:30.969909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:30.977426] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:30.977815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:30.977844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:30.984749] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:30.985105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:30.985138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:30.991851] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:30.992195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:30.992234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:30.998936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:30.999276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:30.999307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.006320] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.006680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.006712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.014073] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.014409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.014441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.022045] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.022410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.022442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.029921] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.030258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.030289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.036770] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.036986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.037015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.043879] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.044209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.044241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.052176] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.052527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.052559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.061104] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.061496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.061527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.068936] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.069277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.069309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.075493] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.075845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.075873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.082388] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.082722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.082762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.089657] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.090010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.090039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.096314] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.096676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.096707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.103385] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.103717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.103748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.110191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.110528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.110559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.117301] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.117643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.117676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.125413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.125764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.125808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.132814] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.133137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.133168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.140805] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.141177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.141222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.147071] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.147373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 
[2024-07-11 21:40:31.147401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.153106] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.153422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.153450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.159417] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.159718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.159745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.165817] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.166140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.166169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.172323] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.172637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.172667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.179105] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.179456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.179492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.420 [2024-07-11 21:40:31.186389] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.420 [2024-07-11 21:40:31.186771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.420 [2024-07-11 21:40:31.186817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.677 [2024-07-11 21:40:31.193485] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.677 [2024-07-11 21:40:31.193842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12832 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:33:56.677 [2024-07-11 21:40:31.193872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.677 [2024-07-11 21:40:31.200539] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.677 [2024-07-11 21:40:31.200925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.677 [2024-07-11 21:40:31.200954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.677 [2024-07-11 21:40:31.207763] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.677 [2024-07-11 21:40:31.208106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.677 [2024-07-11 21:40:31.208151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.677 [2024-07-11 21:40:31.214621] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.677 [2024-07-11 21:40:31.214929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.677 [2024-07-11 21:40:31.214958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.677 [2024-07-11 21:40:31.221025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.677 [2024-07-11 21:40:31.221354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.677 [2024-07-11 21:40:31.221383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.677 [2024-07-11 21:40:31.226792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.677 [2024-07-11 21:40:31.227109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.677 [2024-07-11 21:40:31.227137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.232821] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.233129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.233158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.239279] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.239594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.239623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.245707] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.246014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.246042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.252158] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.252534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.252575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.259228] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.259562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.259591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.266201] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.266584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.266614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.273024] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.273371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.273401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.281191] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.281513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.281543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.289380] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.289718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.289776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.297970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.298319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.298370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.306178] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.306516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.306562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.314327] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.314665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.314711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.322220] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.322541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.322572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.330475] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.330835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.330864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.338725] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.339080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.339108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.346766] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 
[2024-07-11 21:40:31.347084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.347112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.354633] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.354954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.354982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.362472] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.362781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.362809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.370492] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.370805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.370833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.377990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.378305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.378333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.385650] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.385961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.385989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.392195] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.392496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.392524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.399971] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.400290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.400318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.407025] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.407323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.407351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.413144] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.413452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.413478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.419790] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.420107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.420151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.426912] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.427269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.427310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.433008] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.433309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.433337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.678 [2024-07-11 21:40:31.438974] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.678 [2024-07-11 21:40:31.439275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.678 [2024-07-11 21:40:31.439302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.679 [2024-07-11 21:40:31.445241] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.679 [2024-07-11 21:40:31.445543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.679 [2024-07-11 21:40:31.445571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.936 [2024-07-11 21:40:31.451537] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.936 [2024-07-11 21:40:31.451873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.936 [2024-07-11 21:40:31.451901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.936 [2024-07-11 21:40:31.457544] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.936 [2024-07-11 21:40:31.457890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.457920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.463832] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.464132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.464160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.470162] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.470461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.470489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.476378] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.476746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.476797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.482888] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.482994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.483028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:56.937 [2024-07-11 21:40:31.489553] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.489870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.489898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.495413] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.495708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.495737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.501052] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.501364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.501393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.506724] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.507059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.507089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.512716] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.513046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.513075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.519205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.519562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.519591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.525600] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.525913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.525942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.531976] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.532284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.532313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.538236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.538537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.538566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.544559] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.544875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.544903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.550938] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.551243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.551272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.557261] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.557564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.557593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.563140] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.563428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.563458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.569570] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.569872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.569901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.575742] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.576037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.576082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.582054] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.582347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.582376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.588445] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.588734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.588773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.594906] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.595203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.595233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.601502] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.601813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.601841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.607960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.608258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.608288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.614528] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.614836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.614864] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.620878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.621169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.621199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.626691] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.626989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.627017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.633484] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.633805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.937 [2024-07-11 21:40:31.633833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.937 [2024-07-11 21:40:31.639464] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.937 [2024-07-11 21:40:31.639770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.639801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.645462] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.645770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.645818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.651495] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.651817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.651846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.657625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.658002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:56.938 [2024-07-11 21:40:31.658031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.664278] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.664576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.664607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.670859] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.671156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.671187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.677735] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.678032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.678078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.684284] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.684571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.684601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.690509] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.690818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.690846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.696768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.697077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.697107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:56.938 [2024-07-11 21:40:31.703019] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:56.938 [2024-07-11 21:40:31.703296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:56.938 [2024-07-11 21:40:31.703324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.708918] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.709218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.709251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.714659] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.714956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.714986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.720253] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.720532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.720561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.726200] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.726482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.726512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.732676] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.732972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.733001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.738970] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.739263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.739293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.744625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.744927] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.744956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.750997] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.751325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.751355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.758269] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.758622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.758651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.765830] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.766121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.766151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.773290] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.773708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.773740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.781031] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.781374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.781404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.788700] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.789072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.789101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.796587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.796969] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.796997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.803913] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.804251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.804278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.811172] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.811533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.811561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.818399] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.818688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.818725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.825234] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.825576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.196 [2024-07-11 21:40:31.825604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.196 [2024-07-11 21:40:31.832372] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.196 [2024-07-11 21:40:31.832738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.832773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.839878] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.840263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.840290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.847087] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 
00:33:57.197 [2024-07-11 21:40:31.847475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.847517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.854438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.854850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.854879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.861923] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.862310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.862338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.869343] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.869694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.869722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.876792] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.877156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.877184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.884116] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.884476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.884504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.891431] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.891735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.891771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.899043] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.899439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.899481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.905782] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.906053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.906081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.911987] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.912297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.912339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.917935] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.918225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.918253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.923387] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.923656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.923699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.930067] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.930427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.930456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.937488] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.937874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.937910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.945308] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.945444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.945473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.953027] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.953435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.953464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.197 [2024-07-11 21:40:31.961357] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.197 [2024-07-11 21:40:31.961659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.197 [2024-07-11 21:40:31.961690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.455 [2024-07-11 21:40:31.968446] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.455 [2024-07-11 21:40:31.968879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.455 [2024-07-11 21:40:31.968909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.455 [2024-07-11 21:40:31.976084] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.455 [2024-07-11 21:40:31.976441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.455 [2024-07-11 21:40:31.976471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.455 [2024-07-11 21:40:31.983868] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.455 [2024-07-11 21:40:31.984219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.455 [2024-07-11 21:40:31.984249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.455 [2024-07-11 21:40:31.991576] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.455 [2024-07-11 21:40:31.991970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.455 [2024-07-11 21:40:31.991999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:57.455 [2024-07-11 21:40:31.999332] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.455 [2024-07-11 21:40:31.999718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.455 [2024-07-11 21:40:31.999748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.455 [2024-07-11 21:40:32.007230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.455 [2024-07-11 21:40:32.007639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.455 [2024-07-11 21:40:32.007669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.455 [2024-07-11 21:40:32.015403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.455 [2024-07-11 21:40:32.015745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.455 [2024-07-11 21:40:32.015798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.455 [2024-07-11 21:40:32.021768] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.455 [2024-07-11 21:40:32.022055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.022097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.027680] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.027973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.028001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.034011] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.034307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.034337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.040862] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.041159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.041192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.048029] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.048368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.048399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.056076] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.056474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.056505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.063902] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.064203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.064234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.070648] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.071018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.071064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.078534] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.078972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.078999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.086593] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.086905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.086934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.093319] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.093621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.093652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.099990] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.100298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.100329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.107097] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.107419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.107450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.114017] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.114350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.114381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.121689] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.122090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.122122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.129960] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.130343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.130379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.138266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.138683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.138714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.146605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.147000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.147042] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.154530] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.154848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.154877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.162362] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.162705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.162736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.170715] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.171019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.171064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.456 [2024-07-11 21:40:32.178744] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.456 [2024-07-11 21:40:32.179178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.456 [2024-07-11 21:40:32.179208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.457 [2024-07-11 21:40:32.186968] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.457 [2024-07-11 21:40:32.187376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.457 [2024-07-11 21:40:32.187408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.457 [2024-07-11 21:40:32.195310] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.457 [2024-07-11 21:40:32.195702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.457 [2024-07-11 21:40:32.195733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.457 [2024-07-11 21:40:32.203632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.457 [2024-07-11 21:40:32.204037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:33:57.457 [2024-07-11 21:40:32.204083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.457 [2024-07-11 21:40:32.212187] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.457 [2024-07-11 21:40:32.212561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.457 [2024-07-11 21:40:32.212594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.457 [2024-07-11 21:40:32.220365] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.457 [2024-07-11 21:40:32.220745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.457 [2024-07-11 21:40:32.220784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.228873] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.229285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.229317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.237349] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.237762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.237794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.245644] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.246069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.246100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.253884] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.254288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.254319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.261942] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.262180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.262211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.269890] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.270300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.270332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.278305] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.278695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.278726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.286955] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.287365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.287397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.295079] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.295466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.295498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.303072] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.303492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.303523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.311625] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.311989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.312018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.319789] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.320150] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.320181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.328064] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.328483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.328514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.336511] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.336892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.336921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.344602] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.344984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.345017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.352903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.353325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.353355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.361266] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.361705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.361737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.369403] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.369817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.369845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.377605] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.378006] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.378050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.385957] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.386373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.386404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.394325] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.394731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.394770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.402867] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.403268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.715 [2024-07-11 21:40:32.403300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.715 [2024-07-11 21:40:32.410954] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.715 [2024-07-11 21:40:32.411362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.411393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.419303] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.716 [2024-07-11 21:40:32.419680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.419712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.427567] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.716 [2024-07-11 21:40:32.427980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.428023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.436021] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 
00:33:57.716 [2024-07-11 21:40:32.436463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.436495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.444479] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.716 [2024-07-11 21:40:32.444891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.444920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.451819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.716 [2024-07-11 21:40:32.452115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.452146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.457988] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.716 [2024-07-11 21:40:32.458293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.458324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.464529] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.716 [2024-07-11 21:40:32.464847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.464877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.470571] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.716 [2024-07-11 21:40:32.470895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.470923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.716 [2024-07-11 21:40:32.477983] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.716 [2024-07-11 21:40:32.478433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.716 [2024-07-11 21:40:32.478470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.974 [2024-07-11 21:40:32.485666] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.974 [2024-07-11 21:40:32.485988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.974 [2024-07-11 21:40:32.486018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.974 [2024-07-11 21:40:32.493845] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.974 [2024-07-11 21:40:32.494234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.974 [2024-07-11 21:40:32.494266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.974 [2024-07-11 21:40:32.502190] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.974 [2024-07-11 21:40:32.502536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.974 [2024-07-11 21:40:32.502568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.509905] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.510294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.510327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.518258] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.518644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.518675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.526579] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.527018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.527047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.534799] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.535161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.535192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.542702] 
tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.543022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.543050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.551289] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.551594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.551626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.559523] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.559894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.559922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.567476] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.567833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.567861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.574517] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.574887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.574915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.582022] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.582385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.582412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.589712] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.590085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.590129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
00:33:57.975 [2024-07-11 21:40:32.597487] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.597869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.597897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.605039] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.605368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.605396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.612750] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.613115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.613143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.620397] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.620737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.620771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.628230] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.628535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.628564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.635765] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.636132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.636160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.643461] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.643739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.643776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.650903] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.651209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.651237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.658526] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.658820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.658849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.666236] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.666576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.666604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.673940] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.674268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.674295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.681557] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.681896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.681930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.689151] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.689513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.689541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.697042] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.697283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.697312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.703434] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.703675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.703703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.709006] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.709247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.709275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.714587] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.975 [2024-07-11 21:40:32.714838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.975 [2024-07-11 21:40:32.714867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.975 [2024-07-11 21:40:32.719819] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.976 [2024-07-11 21:40:32.720061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.976 [2024-07-11 21:40:32.720090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:57.976 [2024-07-11 21:40:32.725382] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.976 [2024-07-11 21:40:32.725621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.976 [2024-07-11 21:40:32.725649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:57.976 [2024-07-11 21:40:32.730865] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.976 [2024-07-11 21:40:32.731108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.976 [2024-07-11 21:40:32.731135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:57.976 [2024-07-11 21:40:32.736705] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.976 [2024-07-11 21:40:32.736961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.976 [2024-07-11 21:40:32.736990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:57.976 [2024-07-11 21:40:32.742640] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:57.976 [2024-07-11 21:40:32.742892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:57.976 [2024-07-11 21:40:32.742920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.748205] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.748447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.748475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.754083] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.754377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.754405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.760139] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.760380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.760407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.765811] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.766097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.766124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.773477] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.773839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.773867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.779594] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.779844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 
[2024-07-11 21:40:32.779872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.785531] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.785778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.785807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.791333] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.791591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.791619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.798401] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.798748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.798784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.805632] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.805926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.805954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.813262] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.813592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.813620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.820312] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.820650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.820678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.827953] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.828312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.828341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.835421] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.835700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.835728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.842941] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.843286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.843314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.850717] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.851087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.851122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.858670] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.858989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.859018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.866297] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.866612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.866641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.874009] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.874376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.874404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.881457] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.881713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:15 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.881742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.889000] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.889363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.889391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.896438] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.234 [2024-07-11 21:40:32.896744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.234 [2024-07-11 21:40:32.896781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.234 [2024-07-11 21:40:32.903119] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.235 [2024-07-11 21:40:32.903400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.235 [2024-07-11 21:40:32.903428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:33:58.235 [2024-07-11 21:40:32.909982] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.235 [2024-07-11 21:40:32.910319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.235 [2024-07-11 21:40:32.910348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:33:58.235 [2024-07-11 21:40:32.917612] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.235 [2024-07-11 21:40:32.917925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.235 [2024-07-11 21:40:32.917954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:33:58.235 [2024-07-11 21:40:32.925243] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.235 [2024-07-11 21:40:32.925559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:33:58.235 [2024-07-11 21:40:32.925587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:33:58.235 [2024-07-11 21:40:32.932795] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90 00:33:58.235 [2024-07-11 21:40:32.933137] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.235 [2024-07-11 21:40:32.933166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:33:58.235 [2024-07-11 21:40:32.940581] tcp.c:2067:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1cb4f80) with pdu=0x2000190fef90
00:33:58.235 [2024-07-11 21:40:32.940905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:33:58.235 [2024-07-11 21:40:32.940934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:33:58.235
00:33:58.235 Latency(us)
00:33:58.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:58.235 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:33:58.235 nvme0n1 : 2.00 4306.76 538.34 0.00 0.00 3705.72 2487.94 9223.59
00:33:58.235 ===================================================================================================================
00:33:58.235 Total : 4306.76 538.34 0.00 0.00 3705.72 2487.94 9223.59
00:33:58.235 0
00:33:58.235 21:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:33:58.235 21:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:33:58.235 21:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:33:58.235 | .driver_specific
00:33:58.235 | .nvme_error
00:33:58.235 | .status_code
00:33:58.235 | .command_transient_transport_error'
00:33:58.235 21:40:32 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 278 > 0 ))
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 1056735
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1056735 ']'
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1056735
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1056735
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1056735'
00:33:58.491 killing process with pid 1056735
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1056735
00:33:58.491 Received shutdown signal, test time was about 2.000000 seconds
00:33:58.491
00:33:58.491 Latency(us)
00:33:58.491 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:33:58.491 ===================================================================================================================
00:33:58.491 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:33:58.491 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1056735
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 1055371
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 1055371 ']'
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 1055371
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1055371
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']'
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1055371'
00:33:58.748 killing process with pid 1055371
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 1055371
00:33:58.748 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 1055371
00:33:59.005
00:33:59.005 real 0m15.569s
00:33:59.005 user 0m30.945s
00:33:59.005 sys 0m4.186s
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:33:59.005 ************************************
00:33:59.005 END TEST nvmf_digest_error
00:33:59.005 ************************************
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@117 -- # sync
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@120 -- # set +e
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20}
00:33:59.005 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:33:59.005 rmmod nvme_tcp
00:33:59.005 rmmod nvme_fabrics
00:33:59.005 rmmod nvme_keyring
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@124 -- # set -e
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@125 -- # return 0
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 1055371 ']'
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 1055371
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 1055371 ']'
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 1055371
00:33:59.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1055371) - No such process
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 1055371 is not found'
00:33:59.262 Process with pid 1055371 is not found
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null'
00:33:59.262 21:40:33 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:34:01.160 21:40:35 nvmf_tcp.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:34:01.160
00:34:01.160 real 0m34.850s
00:34:01.160 user 1m1.635s
00:34:01.160 sys 0m9.630s
00:34:01.160 21:40:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable
00:34:01.160 21:40:35 nvmf_tcp.nvmf_digest -- common/autotest_common.sh@10 -- # set +x
00:34:01.160 ************************************
00:34:01.160 END TEST nvmf_digest
00:34:01.160 ************************************
00:34:01.160 21:40:35 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0
00:34:01.160 21:40:35 nvmf_tcp -- nvmf/nvmf.sh@111 -- # [[ 0 -eq 1 ]]
00:34:01.160 21:40:35 nvmf_tcp -- nvmf/nvmf.sh@116 -- # [[ 0 -eq 1 ]]
00:34:01.160 21:40:35 nvmf_tcp -- nvmf/nvmf.sh@121 -- # [[ phy == phy ]]
00:34:01.160 21:40:35 nvmf_tcp -- nvmf/nvmf.sh@122 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:34:01.160 21:40:35 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:34:01.160 21:40:35 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable
00:34:01.160 21:40:35 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:34:01.160 ************************************
00:34:01.160 START TEST nvmf_bdevperf
00:34:01.160 ************************************
00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp
00:34:01.160 * Looking for test storage...
00:34:01.160 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:01.160 21:40:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:34:01.161 21:40:35 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:03.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:03.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:03.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}")
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:34:03.688 Found net devices under 0000:0a:00.1: cvl_0_1
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:34:03.688 21:40:37 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:34:03.688 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:34:03.688 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:34:03.688 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:34:03.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:34:03.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms
00:34:03.688
00:34:03.688 --- 10.0.0.2 ping statistics ---
00:34:03.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:03.688 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms
00:34:03.688 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:34:03.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:34:03.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:34:03.689
00:34:03.689 --- 10.0.0.1 ping statistics ---
00:34:03.689 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:34:03.689 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1059115
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1059115
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1059115 ']'
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:34:03.689 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:03.689 [2024-07-11 21:40:38.107266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:03.689 [2024-07-11 21:40:38.107360] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:34:03.689 EAL: No free 2048 kB hugepages reported on node 1
00:34:03.689 [2024-07-11 21:40:38.175153] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3
00:34:03.689 [2024-07-11 21:40:38.262737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:34:03.689 [2024-07-11 21:40:38.262819] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:34:03.689 [2024-07-11 21:40:38.262847] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:34:03.689 [2024-07-11 21:40:38.262863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:34:03.689 [2024-07-11 21:40:38.262873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:03.689 [2024-07-11 21:40:38.262961] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:34:03.689 [2024-07-11 21:40:38.263026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:34:03.689 [2024-07-11 21:40:38.263029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:03.689 [2024-07-11 21:40:38.408456] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:03.689 Malloc0
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:03.689 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:03.947 [2024-07-11 21:40:38.466123] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:34:03.947 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:03.948 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:03.948 {
00:34:03.948 "params": {
00:34:03.948 "name": "Nvme$subsystem",
00:34:03.948 "trtype": "$TEST_TRANSPORT",
00:34:03.948 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:03.948 "adrfam": "ipv4",
00:34:03.948 "trsvcid": "$NVMF_PORT",
00:34:03.948 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:03.948 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:03.948 "hdgst": ${hdgst:-false},
00:34:03.948 "ddgst": ${ddgst:-false}
00:34:03.948 },
00:34:03.948 "method": "bdev_nvme_attach_controller"
00:34:03.948 }
00:34:03.948 EOF
00:34:03.948 )")
00:34:03.948 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:34:03.948 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:34:03.948 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:34:03.948 21:40:38 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:34:03.948 "params": {
00:34:03.948 "name": "Nvme1",
00:34:03.948 "trtype": "tcp",
00:34:03.948 "traddr": "10.0.0.2",
00:34:03.948 "adrfam": "ipv4",
00:34:03.948 "trsvcid": "4420",
00:34:03.948 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:03.948 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:03.948 "hdgst": false,
00:34:03.948 "ddgst": false
00:34:03.948 },
00:34:03.948 "method": "bdev_nvme_attach_controller"
00:34:03.948 }'
00:34:03.948 [2024-07-11 21:40:38.515402] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:03.948 [2024-07-11 21:40:38.515469] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059222 ]
00:34:03.948 EAL: No free 2048 kB hugepages reported on node 1
00:34:03.948 [2024-07-11 21:40:38.575351] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:03.948 [2024-07-11 21:40:38.663911] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:04.206 Running I/O for 1 seconds...
00:34:05.140
00:34:05.140 Latency(us)
00:34:05.140 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:05.140 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:05.140 Verification LBA range: start 0x0 length 0x4000
00:34:05.140 Nvme1n1 : 1.01 8628.79 33.71 0.00 0.00 14772.07 3021.94 14272.28
00:34:05.140 ===================================================================================================================
00:34:05.140 Total : 8628.79 33.71 0.00 0.00 14772.07 3021.94 14272.28
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=1059371
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=()
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:34:05.398 {
00:34:05.398 "params": {
00:34:05.398 "name": "Nvme$subsystem",
00:34:05.398 "trtype": "$TEST_TRANSPORT",
00:34:05.398 "traddr": "$NVMF_FIRST_TARGET_IP",
00:34:05.398 "adrfam": "ipv4",
00:34:05.398 "trsvcid": "$NVMF_PORT",
00:34:05.398 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:34:05.398 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:34:05.398 "hdgst": ${hdgst:-false},
00:34:05.398 "ddgst": ${ddgst:-false}
00:34:05.398 },
00:34:05.398 "method": "bdev_nvme_attach_controller"
00:34:05.398 }
00:34:05.398 EOF
00:34:05.398 )")
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq .
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=,
00:34:05.398 21:40:40 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:34:05.398 "params": {
00:34:05.398 "name": "Nvme1",
00:34:05.398 "trtype": "tcp",
00:34:05.398 "traddr": "10.0.0.2",
00:34:05.398 "adrfam": "ipv4",
00:34:05.398 "trsvcid": "4420",
00:34:05.398 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:34:05.398 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:34:05.398 "hdgst": false,
00:34:05.398 "ddgst": false
00:34:05.398 },
00:34:05.398 "method": "bdev_nvme_attach_controller"
00:34:05.398 }'
00:34:05.398 [2024-07-11 21:40:40.156191] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:05.398 [2024-07-11 21:40:40.156287] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1059371 ]
00:34:05.656 EAL: No free 2048 kB hugepages reported on node 1
00:34:05.656 [2024-07-11 21:40:40.218296] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:34:05.656 [2024-07-11 21:40:40.304696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:34:05.914 Running I/O for 15 seconds...
00:34:08.444 21:40:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 1059115 00:34:08.444 21:40:43 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:34:08.444 [2024-07-11 21:40:43.129641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:43104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.444 [2024-07-11 21:40:43.129694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.444 [2024-07-11 21:40:43.129728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:43112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.444 [2024-07-11 21:40:43.129749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.444 [2024-07-11 21:40:43.129779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:43120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.444 [2024-07-11 21:40:43.129812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.444 [2024-07-11 21:40:43.129830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:43128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.444 [2024-07-11 21:40:43.129845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.445 [2024-07-11 21:40:43.129862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:43136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.445 [2024-07-11 21:40:43.129879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.445 [2024-07-11 21:40:43.129896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:43144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:08.445 [2024-07-11 21:40:43.129911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.445 [2024-07-11 21:40:43.129927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:42152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.445 [2024-07-11 21:40:43.129941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.445 [2024-07-11 21:40:43.129960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:42160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.445 [2024-07-11 21:40:43.129976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.445 [2024-07-11 21:40:43.129993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:42168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.445 [2024-07-11 21:40:43.130010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:08.445 [2024-07-11 21:40:43.130026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:42176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:08.445 [2024-07-11 21:40:43.130057] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:08.445 [2024-07-11 21:40:43.130080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:42184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:34:08.445 [2024-07-11 21:40:43.130097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:08.445 [2024-07-11 21:40:43.130119 - 21:40:43.134108] (command/completion pair above repeated for every remaining queued request: READ lba:42192-43088 len:8 and WRITE lba:43152-43168 len:8, each completed as ABORTED - SQ DELETION (00/08) qid:1)
00:34:08.447 [2024-07-11 21:40:43.134124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1041050 is same with the state(5) to be set
00:34:08.447 [2024-07-11 21:40:43.134143] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:34:08.447 [2024-07-11 21:40:43.134157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:34:08.447 [2024-07-11 21:40:43.134172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:43096 len:8 PRP1 0x0 PRP2 0x0
00:34:08.447 [2024-07-11 21:40:43.134186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:34:08.447 [2024-07-11 21:40:43.134260] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1041050 was disconnected and freed. reset controller.
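The "(00/08)" tag printed in every completion above is the NVMe status code type and status code: SCT 0x0 (generic command status) with SC 0x08, "Command Aborted due to SQ Deletion", which the host driver assigns to every request still queued when the I/O submission queue is torn down for the reset. A minimal decoding sketch, assuming only the standard NVMe completion status layout (phase tag in bit 0, SC in bits 1-8, SCT in bits 9-11, DNR in bit 15) and no SPDK headers:

    #include <stdint.h>
    #include <stdio.h>

    /* Decode the 16-bit NVMe completion status halfword the way
     * spdk_nvme_print_completion formats it: "(SCT/SC)" plus the p/dnr
     * flags. Layout per the NVMe base specification: bit 0 = phase tag,
     * bits 1-8 = status code (SC), bits 9-11 = status code type (SCT),
     * bit 15 = do-not-retry (DNR). */
    static void decode_status(uint16_t status)
    {
        unsigned p   = status & 0x1;
        unsigned sc  = (status >> 1) & 0xff;
        unsigned sct = (status >> 9) & 0x7;
        unsigned dnr = (status >> 15) & 0x1;

        printf("(%02x/%02x) p:%u dnr:%u%s\n", sct, sc, p, dnr,
               (sct == 0x0 && sc == 0x08) ? " -> ABORTED - SQ DELETION" : "");
    }

    int main(void)
    {
        /* SCT 0x0, SC 0x08, phase 0, dnr 0 -- matches the log lines above. */
        decode_status(0x08 << 1);
        return 0;
    }

Compiled with any C compiler, decode_status(0x08 << 1) prints the same "(00/08) p:0 dnr:0" tag the log shows.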
00:34:08.447 [2024-07-11 21:40:43.138161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:08.447 [2024-07-11 21:40:43.138238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:08.447 [2024-07-11 21:40:43.138962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:08.447 [2024-07-11 21:40:43.138992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:08.447 [2024-07-11 21:40:43.139008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:08.447 [2024-07-11 21:40:43.139263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:08.447 [2024-07-11 21:40:43.139509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:08.447 [2024-07-11 21:40:43.139532] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:08.447 [2024-07-11 21:40:43.139553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:08.447 [2024-07-11 21:40:43.143153] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:08.711 [2024-07-11 21:40:43.152479 - 21:40:43.435683] (reset/reconnect sequence above repeated 21 more times at ~14 ms intervals, each attempt failing with connect() errno = 111 against tqpair=0xe0fed0, addr=10.0.0.2, port=4420, and ending in "Resetting controller failed.")
00:34:08.711 [2024-07-11 21:40:43.445000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.711 [2024-07-11 21:40:43.445417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.711 [2024-07-11 21:40:43.445449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:08.711 [2024-07-11 21:40:43.445473] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:08.711 [2024-07-11 21:40:43.445712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:08.712 [2024-07-11 21:40:43.445967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.712 [2024-07-11 21:40:43.445994] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.712 [2024-07-11 21:40:43.446010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.712 [2024-07-11 21:40:43.449595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:08.712 [2024-07-11 21:40:43.458929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.712 [2024-07-11 21:40:43.459311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.712 [2024-07-11 21:40:43.459344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:08.712 [2024-07-11 21:40:43.459362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:08.712 [2024-07-11 21:40:43.459601] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:08.712 [2024-07-11 21:40:43.459857] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.712 [2024-07-11 21:40:43.459883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.712 [2024-07-11 21:40:43.459900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.712 [2024-07-11 21:40:43.463486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:08.712 [2024-07-11 21:40:43.472818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:08.712 [2024-07-11 21:40:43.473201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:08.712 [2024-07-11 21:40:43.473235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:08.712 [2024-07-11 21:40:43.473253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:08.712 [2024-07-11 21:40:43.473498] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:08.712 [2024-07-11 21:40:43.473747] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:08.712 [2024-07-11 21:40:43.473787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:08.712 [2024-07-11 21:40:43.473804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:08.712 [2024-07-11 21:40:43.477424] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.006 [2024-07-11 21:40:43.486884] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.006 [2024-07-11 21:40:43.487322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.006 [2024-07-11 21:40:43.487358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.006 [2024-07-11 21:40:43.487378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.006 [2024-07-11 21:40:43.487629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.006 [2024-07-11 21:40:43.487900] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.006 [2024-07-11 21:40:43.487933] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.006 [2024-07-11 21:40:43.487955] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.006 [2024-07-11 21:40:43.491727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.006 [2024-07-11 21:40:43.501135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.006 [2024-07-11 21:40:43.501567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.006 [2024-07-11 21:40:43.501603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.006 [2024-07-11 21:40:43.501625] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.006 [2024-07-11 21:40:43.501889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.006 [2024-07-11 21:40:43.502150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.006 [2024-07-11 21:40:43.502179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.502198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.505959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.007 [2024-07-11 21:40:43.515071] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.515481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.515514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.515533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.515784] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.516028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.516053] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.516070] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.519657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.007 [2024-07-11 21:40:43.528985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.529393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.529426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.529445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.529685] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.529944] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.529970] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.529987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.533573] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.007 [2024-07-11 21:40:43.542905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.543321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.543354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.543372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.543612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.543870] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.543897] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.543914] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.547497] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.007 [2024-07-11 21:40:43.556842] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.557266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.557299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.557317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.557557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.557813] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.557839] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.557856] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.561438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.007 [2024-07-11 21:40:43.570767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.571163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.571195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.571214] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.571453] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.571697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.571722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.571738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.575336] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.007 [2024-07-11 21:40:43.584650] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.585067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.585099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.585117] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.585363] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.585606] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.585632] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.585648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.589242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.007 [2024-07-11 21:40:43.598551] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.598970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.599002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.599020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.599259] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.599503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.599528] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.599545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.603141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.007 [2024-07-11 21:40:43.612449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.612829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.612862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.612880] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.613120] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.613363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.613388] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.613405] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.617026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.007 [2024-07-11 21:40:43.626337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.626717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.626749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.626780] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.627020] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.627263] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.627289] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.627310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.630904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.007 [2024-07-11 21:40:43.640229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.640614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.640646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.640664] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.640915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.641160] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.007 [2024-07-11 21:40:43.641184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.007 [2024-07-11 21:40:43.641200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.007 [2024-07-11 21:40:43.644797] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.007 [2024-07-11 21:40:43.654123] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.007 [2024-07-11 21:40:43.654529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.007 [2024-07-11 21:40:43.654561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.007 [2024-07-11 21:40:43.654579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.007 [2024-07-11 21:40:43.654832] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.007 [2024-07-11 21:40:43.655078] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.008 [2024-07-11 21:40:43.655102] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.008 [2024-07-11 21:40:43.655117] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.008 [2024-07-11 21:40:43.658708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.008 [2024-07-11 21:40:43.668055] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.008 [2024-07-11 21:40:43.668443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.008 [2024-07-11 21:40:43.668475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.008 [2024-07-11 21:40:43.668494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.008 [2024-07-11 21:40:43.668733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.008 [2024-07-11 21:40:43.668990] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.008 [2024-07-11 21:40:43.669015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.008 [2024-07-11 21:40:43.669040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.008 [2024-07-11 21:40:43.672630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.008 [2024-07-11 21:40:43.681992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.008 [2024-07-11 21:40:43.682377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.008 [2024-07-11 21:40:43.682408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.008 [2024-07-11 21:40:43.682426] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.008 [2024-07-11 21:40:43.682666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.008 [2024-07-11 21:40:43.682924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.008 [2024-07-11 21:40:43.682949] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.008 [2024-07-11 21:40:43.682966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.008 [2024-07-11 21:40:43.686556] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.008 [2024-07-11 21:40:43.695901] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.008 [2024-07-11 21:40:43.696309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.008 [2024-07-11 21:40:43.696341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.008 [2024-07-11 21:40:43.696359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.008 [2024-07-11 21:40:43.696598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.008 [2024-07-11 21:40:43.696853] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.008 [2024-07-11 21:40:43.696878] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.008 [2024-07-11 21:40:43.696894] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.008 [2024-07-11 21:40:43.700485] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.008 [2024-07-11 21:40:43.709824] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.008 [2024-07-11 21:40:43.710239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.008 [2024-07-11 21:40:43.710270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.008 [2024-07-11 21:40:43.710288] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.008 [2024-07-11 21:40:43.710526] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.008 [2024-07-11 21:40:43.710784] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.008 [2024-07-11 21:40:43.710816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.008 [2024-07-11 21:40:43.710832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.008 [2024-07-11 21:40:43.714421] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.008 [2024-07-11 21:40:43.723767] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.008 [2024-07-11 21:40:43.724187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.008 [2024-07-11 21:40:43.724219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.008 [2024-07-11 21:40:43.724237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.008 [2024-07-11 21:40:43.724481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.008 [2024-07-11 21:40:43.724726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.008 [2024-07-11 21:40:43.724751] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.008 [2024-07-11 21:40:43.724780] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.008 [2024-07-11 21:40:43.728377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.008 [2024-07-11 21:40:43.737707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.008 [2024-07-11 21:40:43.738135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.008 [2024-07-11 21:40:43.738168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.008 [2024-07-11 21:40:43.738187] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.008 [2024-07-11 21:40:43.738426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.008 [2024-07-11 21:40:43.738670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.008 [2024-07-11 21:40:43.738694] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.008 [2024-07-11 21:40:43.738710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.008 [2024-07-11 21:40:43.742314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.008 [2024-07-11 21:40:43.751917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.008 [2024-07-11 21:40:43.752402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.008 [2024-07-11 21:40:43.752438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.008 [2024-07-11 21:40:43.752458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.008 [2024-07-11 21:40:43.752711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.008 [2024-07-11 21:40:43.752984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.008 [2024-07-11 21:40:43.753013] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.008 [2024-07-11 21:40:43.753030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.270 [2024-07-11 21:40:43.756698] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.270 [2024-07-11 21:40:43.765835] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.270 [2024-07-11 21:40:43.766245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.270 [2024-07-11 21:40:43.766277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.270 [2024-07-11 21:40:43.766296] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.270 [2024-07-11 21:40:43.766536] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.270 [2024-07-11 21:40:43.766792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.270 [2024-07-11 21:40:43.766817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.270 [2024-07-11 21:40:43.766833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.270 [2024-07-11 21:40:43.770433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.270 [2024-07-11 21:40:43.779773] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.270 [2024-07-11 21:40:43.780178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.270 [2024-07-11 21:40:43.780210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.270 [2024-07-11 21:40:43.780228] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.270 [2024-07-11 21:40:43.780471] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.270 [2024-07-11 21:40:43.780715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.270 [2024-07-11 21:40:43.780740] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.270 [2024-07-11 21:40:43.780768] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.270 [2024-07-11 21:40:43.784379] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.270 [2024-07-11 21:40:43.793722] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.270 [2024-07-11 21:40:43.794116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.270 [2024-07-11 21:40:43.794149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.270 [2024-07-11 21:40:43.794167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.270 [2024-07-11 21:40:43.794406] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.270 [2024-07-11 21:40:43.794650] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.270 [2024-07-11 21:40:43.794675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.270 [2024-07-11 21:40:43.794691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.270 [2024-07-11 21:40:43.798285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.270 [2024-07-11 21:40:43.807634] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.270 [2024-07-11 21:40:43.808060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.270 [2024-07-11 21:40:43.808094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.270 [2024-07-11 21:40:43.808114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.270 [2024-07-11 21:40:43.808353] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.270 [2024-07-11 21:40:43.808595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.270 [2024-07-11 21:40:43.808620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.270 [2024-07-11 21:40:43.808636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.270 [2024-07-11 21:40:43.812232] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.270 [2024-07-11 21:40:43.821555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.270 [2024-07-11 21:40:43.821972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.270 [2024-07-11 21:40:43.822005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.270 [2024-07-11 21:40:43.822029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.270 [2024-07-11 21:40:43.822270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.270 [2024-07-11 21:40:43.822515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.270 [2024-07-11 21:40:43.822541] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.270 [2024-07-11 21:40:43.822557] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.270 [2024-07-11 21:40:43.826161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.270 [2024-07-11 21:40:43.835481] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.835897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.835929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.835948] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.836189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.836434] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.836459] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.836475] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.840071] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.271 [2024-07-11 21:40:43.849388] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.849805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.849840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.849858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.850098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.850342] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.850367] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.850384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.853982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.271 [2024-07-11 21:40:43.863298] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.863724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.863766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.863786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.864026] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.864275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.864301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.864318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.867920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.271 [2024-07-11 21:40:43.877237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.877650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.877683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.877701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.877953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.878197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.878223] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.878239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.881832] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.271 [2024-07-11 21:40:43.891148] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.891530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.891562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.891579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.891830] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.892074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.892099] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.892116] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.895696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.271 [2024-07-11 21:40:43.905005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.905397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.905429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.905448] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.905687] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.905940] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.905967] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.905983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.909566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.271 [2024-07-11 21:40:43.918887] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.919306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.919339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.919357] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.919597] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.919854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.919881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.919898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.923478] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.271 [2024-07-11 21:40:43.932788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.933198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.933230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.933249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.933489] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.933734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.933769] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.933788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.937372] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.271 [2024-07-11 21:40:43.946672] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.947092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.947125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.947143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.947382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.947626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.947651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.947668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.951261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.271 [2024-07-11 21:40:43.960569] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.960982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.961014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.961037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.961278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.961522] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.961547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.271 [2024-07-11 21:40:43.961564] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.271 [2024-07-11 21:40:43.965156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.271 [2024-07-11 21:40:43.974467] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.271 [2024-07-11 21:40:43.974848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.271 [2024-07-11 21:40:43.974880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.271 [2024-07-11 21:40:43.974899] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.271 [2024-07-11 21:40:43.975138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.271 [2024-07-11 21:40:43.975381] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.271 [2024-07-11 21:40:43.975407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.272 [2024-07-11 21:40:43.975423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.272 [2024-07-11 21:40:43.979016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:09.272 [2024-07-11 21:40:43.988544] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:09.272 [2024-07-11 21:40:43.988942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:09.272 [2024-07-11 21:40:43.988976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:09.272 [2024-07-11 21:40:43.988994] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:09.272 [2024-07-11 21:40:43.989235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:09.272 [2024-07-11 21:40:43.989480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:09.272 [2024-07-11 21:40:43.989506] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:09.272 [2024-07-11 21:40:43.989522] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:09.272 [2024-07-11 21:40:43.993116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:09.272 [2024-07-11 21:40:44.002423] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.272 [2024-07-11 21:40:44.002816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.272 [2024-07-11 21:40:44.002850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.272 [2024-07-11 21:40:44.002869] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.272 [2024-07-11 21:40:44.003109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.272 [2024-07-11 21:40:44.003354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.272 [2024-07-11 21:40:44.003380] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.272 [2024-07-11 21:40:44.003403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.272 [2024-07-11 21:40:44.006998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.272 [2024-07-11 21:40:44.016303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.272 [2024-07-11 21:40:44.016688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.272 [2024-07-11 21:40:44.016720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.272 [2024-07-11 21:40:44.016738] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.272 [2024-07-11 21:40:44.016987] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.272 [2024-07-11 21:40:44.017231] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.272 [2024-07-11 21:40:44.017256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.272 [2024-07-11 21:40:44.017272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.272 [2024-07-11 21:40:44.020864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.272 [2024-07-11 21:40:44.030172] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.272 [2024-07-11 21:40:44.030579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.272 [2024-07-11 21:40:44.030612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.272 [2024-07-11 21:40:44.030631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.272 [2024-07-11 21:40:44.030902] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.272 [2024-07-11 21:40:44.031148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.272 [2024-07-11 21:40:44.031174] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.272 [2024-07-11 21:40:44.031191] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.272 [2024-07-11 21:40:44.034801] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.044165] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.044633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.044666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.534 [2024-07-11 21:40:44.044684] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.534 [2024-07-11 21:40:44.044943] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.534 [2024-07-11 21:40:44.045191] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.534 [2024-07-11 21:40:44.045217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.534 [2024-07-11 21:40:44.045234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.534 [2024-07-11 21:40:44.048835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.058137] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.058552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.058586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.534 [2024-07-11 21:40:44.058605] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.534 [2024-07-11 21:40:44.058856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.534 [2024-07-11 21:40:44.059100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.534 [2024-07-11 21:40:44.059125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.534 [2024-07-11 21:40:44.059141] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.534 [2024-07-11 21:40:44.062724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.072039] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.072460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.072493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.534 [2024-07-11 21:40:44.072511] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.534 [2024-07-11 21:40:44.072749] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.534 [2024-07-11 21:40:44.073004] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.534 [2024-07-11 21:40:44.073030] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.534 [2024-07-11 21:40:44.073046] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.534 [2024-07-11 21:40:44.076626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.085942] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.086350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.086383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.534 [2024-07-11 21:40:44.086401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.534 [2024-07-11 21:40:44.086640] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.534 [2024-07-11 21:40:44.086895] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.534 [2024-07-11 21:40:44.086921] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.534 [2024-07-11 21:40:44.086937] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.534 [2024-07-11 21:40:44.090521] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.099836] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.100244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.100276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.534 [2024-07-11 21:40:44.100295] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.534 [2024-07-11 21:40:44.100539] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.534 [2024-07-11 21:40:44.100794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.534 [2024-07-11 21:40:44.100820] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.534 [2024-07-11 21:40:44.100837] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.534 [2024-07-11 21:40:44.104423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.113736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.114134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.114166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.534 [2024-07-11 21:40:44.114185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.534 [2024-07-11 21:40:44.114424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.534 [2024-07-11 21:40:44.114668] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.534 [2024-07-11 21:40:44.114692] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.534 [2024-07-11 21:40:44.114708] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.534 [2024-07-11 21:40:44.118298] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.127608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.128095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.128128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.534 [2024-07-11 21:40:44.128146] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.534 [2024-07-11 21:40:44.128385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.534 [2024-07-11 21:40:44.128629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.534 [2024-07-11 21:40:44.128654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.534 [2024-07-11 21:40:44.128671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.534 [2024-07-11 21:40:44.132270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.141583] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.141963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.141995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.534 [2024-07-11 21:40:44.142014] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.534 [2024-07-11 21:40:44.142253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.534 [2024-07-11 21:40:44.142496] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.534 [2024-07-11 21:40:44.142521] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.534 [2024-07-11 21:40:44.142537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.534 [2024-07-11 21:40:44.146134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.534 [2024-07-11 21:40:44.155678] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.534 [2024-07-11 21:40:44.156099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.534 [2024-07-11 21:40:44.156132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.156150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.156389] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.156634] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.156659] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.156674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.160265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.169581] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.169977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.170009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.170027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.170266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.170511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.170536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.170551] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.174144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.183452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.183849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.183890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.183909] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.184151] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.184395] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.184420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.184436] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.188029] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.197371] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.197782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.197820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.197839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.198080] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.198324] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.198349] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.198365] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.201958] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.211262] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.211674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.211706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.211725] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.211974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.212218] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.212244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.212260] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.215849] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.225151] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.225535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.225567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.225585] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.225837] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.226081] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.226107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.226124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.229704] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.239013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.239420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.239452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.239470] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.239709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.239969] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.239996] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.240013] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.243598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.252910] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.253334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.253367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.253385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.253624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.253878] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.253905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.253921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.257503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.266818] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.267239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.267272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.267290] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.267531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.267786] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.267812] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.267828] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.271408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.280708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.281142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.281174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.281193] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.281432] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.281675] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.281701] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.281717] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.285307] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.535 [2024-07-11 21:40:44.294619] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.535 [2024-07-11 21:40:44.295008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.535 [2024-07-11 21:40:44.295041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.535 [2024-07-11 21:40:44.295060] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.535 [2024-07-11 21:40:44.295301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.535 [2024-07-11 21:40:44.295546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.535 [2024-07-11 21:40:44.295571] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.535 [2024-07-11 21:40:44.295588] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.535 [2024-07-11 21:40:44.299195] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.797 [2024-07-11 21:40:44.308571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.797 [2024-07-11 21:40:44.308992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.797 [2024-07-11 21:40:44.309026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.797 [2024-07-11 21:40:44.309045] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.797 [2024-07-11 21:40:44.309284] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.797 [2024-07-11 21:40:44.309535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.797 [2024-07-11 21:40:44.309562] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.797 [2024-07-11 21:40:44.309578] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.797 [2024-07-11 21:40:44.313171] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.797 [2024-07-11 21:40:44.322483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.797 [2024-07-11 21:40:44.322898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.797 [2024-07-11 21:40:44.322931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.797 [2024-07-11 21:40:44.322950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.797 [2024-07-11 21:40:44.323189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.797 [2024-07-11 21:40:44.323432] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.797 [2024-07-11 21:40:44.323458] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.797 [2024-07-11 21:40:44.323474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.797 [2024-07-11 21:40:44.327069] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.797 [2024-07-11 21:40:44.336377] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.797 [2024-07-11 21:40:44.336790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.797 [2024-07-11 21:40:44.336823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.797 [2024-07-11 21:40:44.336846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.797 [2024-07-11 21:40:44.337086] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.797 [2024-07-11 21:40:44.337331] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.797 [2024-07-11 21:40:44.337356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.797 [2024-07-11 21:40:44.337373] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.797 [2024-07-11 21:40:44.340968] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.797 [2024-07-11 21:40:44.350279] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.797 [2024-07-11 21:40:44.350699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.797 [2024-07-11 21:40:44.350731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.350749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.351000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.351244] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.351269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.351285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.354873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.364182] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.364592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.364623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.364642] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.364891] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.365137] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.365161] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.365178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.368772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.378083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.378466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.378498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.378516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.378766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.379011] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.379036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.379057] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.382640] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.391962] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.392372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.392405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.392423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.392662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.392916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.392943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.392959] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.396544] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.405857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.406249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.406281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.406299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.406538] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.406795] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.406827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.406843] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.410426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.419761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.420183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.420216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.420234] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.420473] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.420716] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.420742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.420766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.424352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.433675] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.434096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.434128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.434147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.434386] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.434629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.434654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.434670] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.438257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.447562] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.447978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.448010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.448029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.448268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.448512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.448537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.448553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.452144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.461456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.461847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.461880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.461898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.462137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.462383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.462407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.462424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.466016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.475328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.475748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.475787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.475805] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.476054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.476300] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.476325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.476340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.479932] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.489249] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.489614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.489647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.489666] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.489918] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.490163] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.490189] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.490205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.493815] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.503125] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.503525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.503559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.798 [2024-07-11 21:40:44.503577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.798 [2024-07-11 21:40:44.503827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.798 [2024-07-11 21:40:44.504071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.798 [2024-07-11 21:40:44.504095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.798 [2024-07-11 21:40:44.504111] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.798 [2024-07-11 21:40:44.507692] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.798 [2024-07-11 21:40:44.517163] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.798 [2024-07-11 21:40:44.517573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.798 [2024-07-11 21:40:44.517607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.799 [2024-07-11 21:40:44.517626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.799 [2024-07-11 21:40:44.517876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.799 [2024-07-11 21:40:44.518122] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.799 [2024-07-11 21:40:44.518147] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.799 [2024-07-11 21:40:44.518170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.799 [2024-07-11 21:40:44.521759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.799 [2024-07-11 21:40:44.531068] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.799 [2024-07-11 21:40:44.531451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.799 [2024-07-11 21:40:44.531484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.799 [2024-07-11 21:40:44.531502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.799 [2024-07-11 21:40:44.531741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.799 [2024-07-11 21:40:44.531994] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.799 [2024-07-11 21:40:44.532020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.799 [2024-07-11 21:40:44.532036] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.799 [2024-07-11 21:40:44.535622] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.799 [2024-07-11 21:40:44.544939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.799 [2024-07-11 21:40:44.545343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.799 [2024-07-11 21:40:44.545375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.799 [2024-07-11 21:40:44.545393] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.799 [2024-07-11 21:40:44.545632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.799 [2024-07-11 21:40:44.545885] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.799 [2024-07-11 21:40:44.545912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.799 [2024-07-11 21:40:44.545928] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.799 [2024-07-11 21:40:44.549513] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:09.799 [2024-07-11 21:40:44.558826] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:09.799 [2024-07-11 21:40:44.559237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:09.799 [2024-07-11 21:40:44.559268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:09.799 [2024-07-11 21:40:44.559287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:09.799 [2024-07-11 21:40:44.559525] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:09.799 [2024-07-11 21:40:44.559780] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:09.799 [2024-07-11 21:40:44.559807] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:09.799 [2024-07-11 21:40:44.559823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:09.799 [2024-07-11 21:40:44.563423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.059 [2024-07-11 21:40:44.572815] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.059 [2024-07-11 21:40:44.573234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.059 [2024-07-11 21:40:44.573272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.059 [2024-07-11 21:40:44.573292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.059 [2024-07-11 21:40:44.573532] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.059 [2024-07-11 21:40:44.573788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.059 [2024-07-11 21:40:44.573815] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.059 [2024-07-11 21:40:44.573831] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.059 [2024-07-11 21:40:44.577444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.059 [2024-07-11 21:40:44.586758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.059 [2024-07-11 21:40:44.587172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.059 [2024-07-11 21:40:44.587204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.059 [2024-07-11 21:40:44.587222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.059 [2024-07-11 21:40:44.587461] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.059 [2024-07-11 21:40:44.587704] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.059 [2024-07-11 21:40:44.587730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.059 [2024-07-11 21:40:44.587746] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.059 [2024-07-11 21:40:44.591341] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.059 [2024-07-11 21:40:44.600664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.059 [2024-07-11 21:40:44.601103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.059 [2024-07-11 21:40:44.601136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.059 [2024-07-11 21:40:44.601154] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.059 [2024-07-11 21:40:44.601393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.059 [2024-07-11 21:40:44.601636] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.059 [2024-07-11 21:40:44.601661] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.059 [2024-07-11 21:40:44.601677] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.059 [2024-07-11 21:40:44.605270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.059 [2024-07-11 21:40:44.614582] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.059 [2024-07-11 21:40:44.614982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.059 [2024-07-11 21:40:44.615016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.059 [2024-07-11 21:40:44.615035] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.059 [2024-07-11 21:40:44.615274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.059 [2024-07-11 21:40:44.615524] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.059 [2024-07-11 21:40:44.615550] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.059 [2024-07-11 21:40:44.615566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.059 [2024-07-11 21:40:44.619160] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.059 [2024-07-11 21:40:44.628466] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.059 [2024-07-11 21:40:44.628895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.059 [2024-07-11 21:40:44.628928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.059 [2024-07-11 21:40:44.628946] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.059 [2024-07-11 21:40:44.629186] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.059 [2024-07-11 21:40:44.629429] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.059 [2024-07-11 21:40:44.629454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.059 [2024-07-11 21:40:44.629469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.059 [2024-07-11 21:40:44.633062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.059 [2024-07-11 21:40:44.642399] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.059 [2024-07-11 21:40:44.642750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.059 [2024-07-11 21:40:44.642791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.059 [2024-07-11 21:40:44.642811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.059 [2024-07-11 21:40:44.643050] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.059 [2024-07-11 21:40:44.643296] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.059 [2024-07-11 21:40:44.643322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.059 [2024-07-11 21:40:44.643338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.059 [2024-07-11 21:40:44.646930] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.059 [2024-07-11 21:40:44.656451] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.059 [2024-07-11 21:40:44.656872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.656904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.656923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.657162] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.657405] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.657430] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.657446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.661037] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.060 [2024-07-11 21:40:44.670350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.670733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.670773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.670792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.671032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.671275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.671300] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.671317] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.674906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.060 [2024-07-11 21:40:44.684216] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.684634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.684667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.684685] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.684935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.685179] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.685204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.685221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.688811] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.060 [2024-07-11 21:40:44.698114] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.698545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.698577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.698595] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.698846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.699090] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.699115] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.699131] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.702714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.060 [2024-07-11 21:40:44.712024] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.712431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.712464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.712489] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.712730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.712986] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.713012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.713029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.716613] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.060 [2024-07-11 21:40:44.725930] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.726344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.726376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.726394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.726634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.726889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.726915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.726931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.730511] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.060 [2024-07-11 21:40:44.739820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.740227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.740259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.740277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.740516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.740768] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.740794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.740811] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.744394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.060 [2024-07-11 21:40:44.753699] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.754115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.754147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.754165] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.754404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.754647] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.754678] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.754694] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.758289] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.060 [2024-07-11 21:40:44.767597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.768005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.768038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.768056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.768295] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.768539] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.768565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.768581] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.772174] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.060 [2024-07-11 21:40:44.781480] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.781906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.781939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.781956] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.782196] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.782438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.782464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.782481] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.786072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.060 [2024-07-11 21:40:44.795382] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.060 [2024-07-11 21:40:44.795782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.060 [2024-07-11 21:40:44.795816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.060 [2024-07-11 21:40:44.795834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.060 [2024-07-11 21:40:44.796074] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.060 [2024-07-11 21:40:44.796319] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.060 [2024-07-11 21:40:44.796343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.060 [2024-07-11 21:40:44.796360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.060 [2024-07-11 21:40:44.799954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.060 [2024-07-11 21:40:44.809283] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.061 [2024-07-11 21:40:44.809671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.061 [2024-07-11 21:40:44.809713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.061 [2024-07-11 21:40:44.809731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.061 [2024-07-11 21:40:44.809978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.061 [2024-07-11 21:40:44.810223] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.061 [2024-07-11 21:40:44.810247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.061 [2024-07-11 21:40:44.810264] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.061 [2024-07-11 21:40:44.813856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.061 [2024-07-11 21:40:44.823173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.061 [2024-07-11 21:40:44.823558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.061 [2024-07-11 21:40:44.823591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.061 [2024-07-11 21:40:44.823609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.061 [2024-07-11 21:40:44.823860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.061 [2024-07-11 21:40:44.824105] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.061 [2024-07-11 21:40:44.824137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.061 [2024-07-11 21:40:44.824153] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.061 [2024-07-11 21:40:44.827773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.322 [2024-07-11 21:40:44.837217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.322 [2024-07-11 21:40:44.837616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.322 [2024-07-11 21:40:44.837649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.322 [2024-07-11 21:40:44.837668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.322 [2024-07-11 21:40:44.837917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.322 [2024-07-11 21:40:44.838162] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.322 [2024-07-11 21:40:44.838187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.322 [2024-07-11 21:40:44.838204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.322 [2024-07-11 21:40:44.841800] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.322 [2024-07-11 21:40:44.851116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.322 [2024-07-11 21:40:44.851532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.322 [2024-07-11 21:40:44.851564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.322 [2024-07-11 21:40:44.851582] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.322 [2024-07-11 21:40:44.851838] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.323 [2024-07-11 21:40:44.852083] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.323 [2024-07-11 21:40:44.852108] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.323 [2024-07-11 21:40:44.852124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.323 [2024-07-11 21:40:44.855711] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.323 [2024-07-11 21:40:44.865030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.323 [2024-07-11 21:40:44.865434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-11 21:40:44.865466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.323 [2024-07-11 21:40:44.865484] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.323 [2024-07-11 21:40:44.865723] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.323 [2024-07-11 21:40:44.865983] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.323 [2024-07-11 21:40:44.866009] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.323 [2024-07-11 21:40:44.866026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.323 [2024-07-11 21:40:44.869612] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.323 [2024-07-11 21:40:44.878939] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.323 [2024-07-11 21:40:44.879345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-11 21:40:44.879377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.323 [2024-07-11 21:40:44.879395] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.323 [2024-07-11 21:40:44.879634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.323 [2024-07-11 21:40:44.879889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.323 [2024-07-11 21:40:44.879915] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.323 [2024-07-11 21:40:44.879931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.323 [2024-07-11 21:40:44.883517] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.323 [2024-07-11 21:40:44.892838] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.323 [2024-07-11 21:40:44.893219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-11 21:40:44.893251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.323 [2024-07-11 21:40:44.893269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.323 [2024-07-11 21:40:44.893509] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.323 [2024-07-11 21:40:44.893762] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.323 [2024-07-11 21:40:44.893787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.323 [2024-07-11 21:40:44.893809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.323 [2024-07-11 21:40:44.897397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.323 [2024-07-11 21:40:44.906729] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.323 [2024-07-11 21:40:44.907154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-11 21:40:44.907186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.323 [2024-07-11 21:40:44.907205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.323 [2024-07-11 21:40:44.907444] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.323 [2024-07-11 21:40:44.907688] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.323 [2024-07-11 21:40:44.907713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.323 [2024-07-11 21:40:44.907729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.323 [2024-07-11 21:40:44.911322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.323 [2024-07-11 21:40:44.920645] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.323 [2024-07-11 21:40:44.921044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-11 21:40:44.921076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.323 [2024-07-11 21:40:44.921094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.323 [2024-07-11 21:40:44.921333] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.323 [2024-07-11 21:40:44.921578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.323 [2024-07-11 21:40:44.921602] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.323 [2024-07-11 21:40:44.921618] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.323 [2024-07-11 21:40:44.925215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.323 [2024-07-11 21:40:44.934535] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.323 [2024-07-11 21:40:44.934937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-11 21:40:44.934970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.323 [2024-07-11 21:40:44.934988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.323 [2024-07-11 21:40:44.935228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.323 [2024-07-11 21:40:44.935472] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.323 [2024-07-11 21:40:44.935496] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.323 [2024-07-11 21:40:44.935512] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.323 [2024-07-11 21:40:44.939109] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.323 [2024-07-11 21:40:44.948417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.323 [2024-07-11 21:40:44.948834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-11 21:40:44.948872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.323 [2024-07-11 21:40:44.948891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.323 [2024-07-11 21:40:44.949131] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.323 [2024-07-11 21:40:44.949376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.323 [2024-07-11 21:40:44.949402] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.323 [2024-07-11 21:40:44.949418] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.323 [2024-07-11 21:40:44.953007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.323 [2024-07-11 21:40:44.962336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.323 [2024-07-11 21:40:44.962759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.323 [2024-07-11 21:40:44.962800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.323 [2024-07-11 21:40:44.962818] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.323 [2024-07-11 21:40:44.963058] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:44.963302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:44.963328] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:44.963344] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:44.966938] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.324 [2024-07-11 21:40:44.976270] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.324 [2024-07-11 21:40:44.976652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-11 21:40:44.976684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.324 [2024-07-11 21:40:44.976702] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.324 [2024-07-11 21:40:44.976954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:44.977198] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:44.977224] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:44.977240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:44.980839] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.324 [2024-07-11 21:40:44.990166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.324 [2024-07-11 21:40:44.990576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-11 21:40:44.990608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.324 [2024-07-11 21:40:44.990626] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.324 [2024-07-11 21:40:44.990879] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:44.991128] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:44.991154] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:44.991170] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:44.994761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.324 [2024-07-11 21:40:45.004070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.324 [2024-07-11 21:40:45.004478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-11 21:40:45.004510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.324 [2024-07-11 21:40:45.004528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.324 [2024-07-11 21:40:45.004779] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:45.005023] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:45.005048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:45.005064] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:45.008648] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.324 [2024-07-11 21:40:45.017984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.324 [2024-07-11 21:40:45.018396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-11 21:40:45.018428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.324 [2024-07-11 21:40:45.018446] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.324 [2024-07-11 21:40:45.018686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:45.018943] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:45.018969] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:45.018986] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:45.022570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.324 [2024-07-11 21:40:45.031886] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.324 [2024-07-11 21:40:45.032298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-11 21:40:45.032330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.324 [2024-07-11 21:40:45.032348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.324 [2024-07-11 21:40:45.032587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:45.032844] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:45.032870] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:45.032887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:45.036470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.324 [2024-07-11 21:40:45.045795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.324 [2024-07-11 21:40:45.046224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-11 21:40:45.046256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.324 [2024-07-11 21:40:45.046274] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.324 [2024-07-11 21:40:45.046513] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:45.046769] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:45.046794] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:45.046810] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:45.050394] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.324 [2024-07-11 21:40:45.059707] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.324 [2024-07-11 21:40:45.060124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-11 21:40:45.060156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.324 [2024-07-11 21:40:45.060176] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.324 [2024-07-11 21:40:45.060415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:45.060660] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:45.060686] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:45.060702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:45.064301] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.324 [2024-07-11 21:40:45.073617] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.324 [2024-07-11 21:40:45.074042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.324 [2024-07-11 21:40:45.074075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.324 [2024-07-11 21:40:45.074094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.324 [2024-07-11 21:40:45.074334] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.324 [2024-07-11 21:40:45.074586] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.324 [2024-07-11 21:40:45.074612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.324 [2024-07-11 21:40:45.074628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.324 [2024-07-11 21:40:45.078229] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.325 [2024-07-11 21:40:45.087563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.325 [2024-07-11 21:40:45.087970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.325 [2024-07-11 21:40:45.088002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.325 [2024-07-11 21:40:45.088026] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.325 [2024-07-11 21:40:45.088272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.325 [2024-07-11 21:40:45.088526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.325 [2024-07-11 21:40:45.088553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.325 [2024-07-11 21:40:45.088570] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.587 [2024-07-11 21:40:45.092207] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.587 [2024-07-11 21:40:45.101555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.587 [2024-07-11 21:40:45.101974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.587 [2024-07-11 21:40:45.102007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.587 [2024-07-11 21:40:45.102025] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.587 [2024-07-11 21:40:45.102264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.587 [2024-07-11 21:40:45.102507] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.587 [2024-07-11 21:40:45.102533] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.587 [2024-07-11 21:40:45.102549] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.587 [2024-07-11 21:40:45.106146] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.587 [2024-07-11 21:40:45.115456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.587 [2024-07-11 21:40:45.115885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.587 [2024-07-11 21:40:45.115917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.587 [2024-07-11 21:40:45.115936] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.587 [2024-07-11 21:40:45.116175] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.587 [2024-07-11 21:40:45.116417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.587 [2024-07-11 21:40:45.116442] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.587 [2024-07-11 21:40:45.116459] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.587 [2024-07-11 21:40:45.120056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.587 [2024-07-11 21:40:45.129380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.587 [2024-07-11 21:40:45.129797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.587 [2024-07-11 21:40:45.129835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.587 [2024-07-11 21:40:45.129854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.587 [2024-07-11 21:40:45.130093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.587 [2024-07-11 21:40:45.130338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.587 [2024-07-11 21:40:45.130369] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.587 [2024-07-11 21:40:45.130387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.587 [2024-07-11 21:40:45.133979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.587 [2024-07-11 21:40:45.143304] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.587 [2024-07-11 21:40:45.143716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.587 [2024-07-11 21:40:45.143749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.587 [2024-07-11 21:40:45.143776] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.587 [2024-07-11 21:40:45.144022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.587 [2024-07-11 21:40:45.144267] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.587 [2024-07-11 21:40:45.144293] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.587 [2024-07-11 21:40:45.144310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.587 [2024-07-11 21:40:45.147903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:10.587 [2024-07-11 21:40:45.157237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:10.587 [2024-07-11 21:40:45.157655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:10.587 [2024-07-11 21:40:45.157688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:10.587 [2024-07-11 21:40:45.157706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:10.587 [2024-07-11 21:40:45.157957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:10.587 [2024-07-11 21:40:45.158200] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:10.587 [2024-07-11 21:40:45.158226] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:10.587 [2024-07-11 21:40:45.158242] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:10.587 [2024-07-11 21:40:45.161846] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:10.587 [2024-07-11 21:40:45.171413] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.587 [2024-07-11 21:40:45.171837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.587 [2024-07-11 21:40:45.171869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.587 [2024-07-11 21:40:45.171888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.587 [2024-07-11 21:40:45.172127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.587 [2024-07-11 21:40:45.172370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.587 [2024-07-11 21:40:45.172395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.587 [2024-07-11 21:40:45.172411] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.587 [2024-07-11 21:40:45.176016] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.587 [2024-07-11 21:40:45.185336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.587 [2024-07-11 21:40:45.185761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.587 [2024-07-11 21:40:45.185794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.587 [2024-07-11 21:40:45.185812] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.587 [2024-07-11 21:40:45.186051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.587 [2024-07-11 21:40:45.186294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.587 [2024-07-11 21:40:45.186319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.587 [2024-07-11 21:40:45.186336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.587 [2024-07-11 21:40:45.189931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.587 [2024-07-11 21:40:45.199243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.587 [2024-07-11 21:40:45.199725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.587 [2024-07-11 21:40:45.199765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.587 [2024-07-11 21:40:45.199785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.587 [2024-07-11 21:40:45.200025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.587 [2024-07-11 21:40:45.200268] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.587 [2024-07-11 21:40:45.200294] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.587 [2024-07-11 21:40:45.200310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.587 [2024-07-11 21:40:45.203902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.587 [2024-07-11 21:40:45.213213] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.587 [2024-07-11 21:40:45.213730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.587 [2024-07-11 21:40:45.213788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.587 [2024-07-11 21:40:45.213806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.587 [2024-07-11 21:40:45.214045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.587 [2024-07-11 21:40:45.214288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.587 [2024-07-11 21:40:45.214314] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.587 [2024-07-11 21:40:45.214329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.587 [2024-07-11 21:40:45.217923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.587 [2024-07-11 21:40:45.227243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.587 [2024-07-11 21:40:45.227721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.587 [2024-07-11 21:40:45.227762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.587 [2024-07-11 21:40:45.227783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.587 [2024-07-11 21:40:45.228029] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.587 [2024-07-11 21:40:45.228272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.587 [2024-07-11 21:40:45.228297] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.587 [2024-07-11 21:40:45.228313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.231903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.241217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.241730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.241793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.241811] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.242051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.242294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.242319] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.242335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.245925] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.255240] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.255650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.255683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.255701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.255954] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.256208] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.256233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.256250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.259841] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.269156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.269561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.269594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.269612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.269866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.270109] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.270135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.270156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.273744] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.283063] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.283493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.283525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.283543] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.283796] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.284042] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.284067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.284084] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.287668] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.296981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.297391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.297424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.297442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.297681] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.297938] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.297965] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.297981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.301566] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.310888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.311294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.311326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.311344] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.311583] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.311839] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.311866] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.311882] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.315466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.324788] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.325219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.325258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.325277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.325516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.325773] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.325798] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.325815] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.329397] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.338705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.339131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.339164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.339182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.339420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.339664] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.339689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.339706] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.588 [2024-07-11 21:40:45.343302] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.588 [2024-07-11 21:40:45.352644] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.588 [2024-07-11 21:40:45.353060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.588 [2024-07-11 21:40:45.353093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.588 [2024-07-11 21:40:45.353112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.588 [2024-07-11 21:40:45.353352] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.588 [2024-07-11 21:40:45.353604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.588 [2024-07-11 21:40:45.353631] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.588 [2024-07-11 21:40:45.353647] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.848 [2024-07-11 21:40:45.357274] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.848 [2024-07-11 21:40:45.366625] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.848 [2024-07-11 21:40:45.367055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.848 [2024-07-11 21:40:45.367087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.848 [2024-07-11 21:40:45.367106] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.848 [2024-07-11 21:40:45.367346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.848 [2024-07-11 21:40:45.367595] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.848 [2024-07-11 21:40:45.367620] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.848 [2024-07-11 21:40:45.367636] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.848 [2024-07-11 21:40:45.371236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.848 [2024-07-11 21:40:45.380556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.848 [2024-07-11 21:40:45.380957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.848 [2024-07-11 21:40:45.380990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.848 [2024-07-11 21:40:45.381009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.848 [2024-07-11 21:40:45.381249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.848 [2024-07-11 21:40:45.381495] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.848 [2024-07-11 21:40:45.381520] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.848 [2024-07-11 21:40:45.381536] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.848 [2024-07-11 21:40:45.385132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.848 [2024-07-11 21:40:45.394441] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.848 [2024-07-11 21:40:45.394836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.848 [2024-07-11 21:40:45.394869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.848 [2024-07-11 21:40:45.394887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.848 [2024-07-11 21:40:45.395127] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.848 [2024-07-11 21:40:45.395372] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.848 [2024-07-11 21:40:45.395398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.848 [2024-07-11 21:40:45.395415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.848 [2024-07-11 21:40:45.399007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.848 [2024-07-11 21:40:45.408337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.848 [2024-07-11 21:40:45.408804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.848 [2024-07-11 21:40:45.408837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.848 [2024-07-11 21:40:45.408855] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.848 [2024-07-11 21:40:45.409095] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.848 [2024-07-11 21:40:45.409338] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.848 [2024-07-11 21:40:45.409363] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.848 [2024-07-11 21:40:45.409379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.848 [2024-07-11 21:40:45.412973] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.848 [2024-07-11 21:40:45.422290] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.848 [2024-07-11 21:40:45.422704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.848 [2024-07-11 21:40:45.422736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.848 [2024-07-11 21:40:45.422765] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.848 [2024-07-11 21:40:45.423018] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.848 [2024-07-11 21:40:45.423261] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.848 [2024-07-11 21:40:45.423286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.848 [2024-07-11 21:40:45.423302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.848 [2024-07-11 21:40:45.426893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.848 [2024-07-11 21:40:45.436200] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.848 [2024-07-11 21:40:45.436607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.848 [2024-07-11 21:40:45.436639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.848 [2024-07-11 21:40:45.436657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.848 [2024-07-11 21:40:45.436907] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.848 [2024-07-11 21:40:45.437152] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.848 [2024-07-11 21:40:45.437177] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.848 [2024-07-11 21:40:45.437193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.848 [2024-07-11 21:40:45.440785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.848 [2024-07-11 21:40:45.450091] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.848 [2024-07-11 21:40:45.450477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.450509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.450528] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.450781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.451025] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.451050] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.451067] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.454652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.463969] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.464350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.464382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.464406] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.464646] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.464902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.464928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.464945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.468533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.477854] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.478261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.478293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.478311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.478550] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.478806] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.478832] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.478847] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.482430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.491745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.492162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.492194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.492213] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.492452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.492697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.492722] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.492738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.496331] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.505643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.506104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.506136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.506155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.506395] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.506640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.506671] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.506688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.510283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.519611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.520039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.520091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.520110] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.520350] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.520593] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.520619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.520635] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.524243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.533564] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.533979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.534011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.534030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.534269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.534512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.534537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.534553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.538304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.547623] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.548058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.548114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.548133] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.548372] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.548615] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.548639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.548655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.552249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.561580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.562032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.562064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.562082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.562321] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.562564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.562589] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.562605] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.566215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.575555] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.575978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.576011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.576030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.576269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.576512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.576537] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.576554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.849 [2024-07-11 21:40:45.580141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.849 [2024-07-11 21:40:45.589468] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.849 [2024-07-11 21:40:45.589893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.849 [2024-07-11 21:40:45.589926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.849 [2024-07-11 21:40:45.589944] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.849 [2024-07-11 21:40:45.590184] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.849 [2024-07-11 21:40:45.590428] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.849 [2024-07-11 21:40:45.590453] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.849 [2024-07-11 21:40:45.590469] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.850 [2024-07-11 21:40:45.594065] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.850 [2024-07-11 21:40:45.603397] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.850 [2024-07-11 21:40:45.603808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.850 [2024-07-11 21:40:45.603841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.850 [2024-07-11 21:40:45.603859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:10.850 [2024-07-11 21:40:45.604104] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:10.850 [2024-07-11 21:40:45.604348] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:10.850 [2024-07-11 21:40:45.604372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:10.850 [2024-07-11 21:40:45.604388] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:10.850 [2024-07-11 21:40:45.607974] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:10.850 [2024-07-11 21:40:45.617315] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:10.850 [2024-07-11 21:40:45.617733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:10.850 [2024-07-11 21:40:45.617774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:10.850 [2024-07-11 21:40:45.617794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.111 [2024-07-11 21:40:45.618039] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.111 [2024-07-11 21:40:45.618294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.111 [2024-07-11 21:40:45.618321] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.111 [2024-07-11 21:40:45.618337] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.111 [2024-07-11 21:40:45.621931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.111 [2024-07-11 21:40:45.631254] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.111 [2024-07-11 21:40:45.631809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.111 [2024-07-11 21:40:45.631843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.111 [2024-07-11 21:40:45.631861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.111 [2024-07-11 21:40:45.632108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.111 [2024-07-11 21:40:45.632352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.111 [2024-07-11 21:40:45.632377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.111 [2024-07-11 21:40:45.632394] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.111 [2024-07-11 21:40:45.635979] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.111 [2024-07-11 21:40:45.645299] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.645678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.645711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.645729] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.645976] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.112 [2024-07-11 21:40:45.646220] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.112 [2024-07-11 21:40:45.646244] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.112 [2024-07-11 21:40:45.646266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.112 [2024-07-11 21:40:45.649866] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.112 [2024-07-11 21:40:45.659190] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.659598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.659631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.659649] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.659904] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.112 [2024-07-11 21:40:45.660149] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.112 [2024-07-11 21:40:45.660175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.112 [2024-07-11 21:40:45.660192] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.112 [2024-07-11 21:40:45.663790] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.112 [2024-07-11 21:40:45.673135] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.673562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.673594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.673613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.673867] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.112 [2024-07-11 21:40:45.674112] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.112 [2024-07-11 21:40:45.674138] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.112 [2024-07-11 21:40:45.674154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.112 [2024-07-11 21:40:45.677740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.112 [2024-07-11 21:40:45.687070] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.687452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.687484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.687502] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.687741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.112 [2024-07-11 21:40:45.687998] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.112 [2024-07-11 21:40:45.688023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.112 [2024-07-11 21:40:45.688040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.112 [2024-07-11 21:40:45.691626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.112 [2024-07-11 21:40:45.700961] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.701374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.701411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.701430] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.701669] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.112 [2024-07-11 21:40:45.701924] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.112 [2024-07-11 21:40:45.701950] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.112 [2024-07-11 21:40:45.701966] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.112 [2024-07-11 21:40:45.705554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.112 [2024-07-11 21:40:45.714888] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.715304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.715336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.715354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.715594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.112 [2024-07-11 21:40:45.715850] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.112 [2024-07-11 21:40:45.715877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.112 [2024-07-11 21:40:45.715893] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.112 [2024-07-11 21:40:45.719477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.112 [2024-07-11 21:40:45.728792] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.729176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.729209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.729227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.729467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.112 [2024-07-11 21:40:45.729710] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.112 [2024-07-11 21:40:45.729736] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.112 [2024-07-11 21:40:45.729764] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.112 [2024-07-11 21:40:45.733350] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.112 [2024-07-11 21:40:45.742660] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.743084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.743116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.743134] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.743373] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.112 [2024-07-11 21:40:45.743623] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.112 [2024-07-11 21:40:45.743648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.112 [2024-07-11 21:40:45.743665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.112 [2024-07-11 21:40:45.747261] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.112 [2024-07-11 21:40:45.756570] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.112 [2024-07-11 21:40:45.756959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.112 [2024-07-11 21:40:45.756991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.112 [2024-07-11 21:40:45.757010] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.112 [2024-07-11 21:40:45.757249] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.757492] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.757518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.113 [2024-07-11 21:40:45.757534] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.113 [2024-07-11 21:40:45.761135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.113 [2024-07-11 21:40:45.770447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.113 [2024-07-11 21:40:45.770831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.113 [2024-07-11 21:40:45.770865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.113 [2024-07-11 21:40:45.770883] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.113 [2024-07-11 21:40:45.771124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.771370] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.771395] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.113 [2024-07-11 21:40:45.771412] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.113 [2024-07-11 21:40:45.775007] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.113 [2024-07-11 21:40:45.784319] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.113 [2024-07-11 21:40:45.784703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.113 [2024-07-11 21:40:45.784736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.113 [2024-07-11 21:40:45.784766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.113 [2024-07-11 21:40:45.785009] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.785254] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.785280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.113 [2024-07-11 21:40:45.785296] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.113 [2024-07-11 21:40:45.788895] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.113 [2024-07-11 21:40:45.798203] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.113 [2024-07-11 21:40:45.798611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.113 [2024-07-11 21:40:45.798643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.113 [2024-07-11 21:40:45.798661] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.113 [2024-07-11 21:40:45.798914] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.799158] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.799184] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.113 [2024-07-11 21:40:45.799200] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.113 [2024-07-11 21:40:45.802791] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.113 [2024-07-11 21:40:45.812104] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.113 [2024-07-11 21:40:45.812508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.113 [2024-07-11 21:40:45.812541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.113 [2024-07-11 21:40:45.812559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.113 [2024-07-11 21:40:45.812811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.813055] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.813080] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.113 [2024-07-11 21:40:45.813097] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.113 [2024-07-11 21:40:45.816682] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.113 [2024-07-11 21:40:45.826000] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.113 [2024-07-11 21:40:45.826413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.113 [2024-07-11 21:40:45.826445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.113 [2024-07-11 21:40:45.826463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.113 [2024-07-11 21:40:45.826702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.826959] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.826985] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.113 [2024-07-11 21:40:45.827002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.113 [2024-07-11 21:40:45.830584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.113 [2024-07-11 21:40:45.839897] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.113 [2024-07-11 21:40:45.840335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.113 [2024-07-11 21:40:45.840368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.113 [2024-07-11 21:40:45.840391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.113 [2024-07-11 21:40:45.840631] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.840888] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.840914] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.113 [2024-07-11 21:40:45.840931] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.113 [2024-07-11 21:40:45.844514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.113 [2024-07-11 21:40:45.853841] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.113 [2024-07-11 21:40:45.854218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.113 [2024-07-11 21:40:45.854250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.113 [2024-07-11 21:40:45.854268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.113 [2024-07-11 21:40:45.854508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.854751] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.854787] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.113 [2024-07-11 21:40:45.854803] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.113 [2024-07-11 21:40:45.858387] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.113 [2024-07-11 21:40:45.867702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.113 [2024-07-11 21:40:45.868287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.113 [2024-07-11 21:40:45.868319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.113 [2024-07-11 21:40:45.868337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.113 [2024-07-11 21:40:45.868576] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.113 [2024-07-11 21:40:45.868832] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.113 [2024-07-11 21:40:45.868858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.114 [2024-07-11 21:40:45.868874] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.114 [2024-07-11 21:40:45.872458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.374 [2024-07-11 21:40:45.881622] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:34:11.374 [2024-07-11 21:40:45.882034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:11.374 [2024-07-11 21:40:45.882067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420
00:34:11.374 [2024-07-11 21:40:45.882086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set
00:34:11.374 [2024-07-11 21:40:45.882326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor
00:34:11.374 [2024-07-11 21:40:45.882570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:34:11.374 [2024-07-11 21:40:45.882606] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:34:11.374 [2024-07-11 21:40:45.882623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:34:11.374 [2024-07-11 21:40:45.886228] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:34:11.374 [2024-07-11 21:40:45.895563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:45.895980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:45.896012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:45.896030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:45.896269] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:45.896513] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:45.896538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:45.896555] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:45.900155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.374 [2024-07-11 21:40:45.909475] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:45.909868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:45.909902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:45.909921] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:45.910161] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:45.910406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:45.910431] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:45.910447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:45.914041] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.374 [2024-07-11 21:40:45.923353] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:45.923766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:45.923799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:45.923817] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:45.924056] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:45.924299] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:45.924324] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:45.924341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:45.927935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.374 [2024-07-11 21:40:45.937243] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:45.937642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:45.937675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:45.937693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:45.937945] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:45.938188] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:45.938214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:45.938231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:45.941822] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.374 [2024-07-11 21:40:45.951144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:45.951557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:45.951588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:45.951606] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:45.951857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:45.952100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:45.952135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:45.952151] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:45.955738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.374 [2024-07-11 21:40:45.965074] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:45.965454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:45.965487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:45.965505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:45.965745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:45.966009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:45.966034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:45.966049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:45.969639] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.374 [2024-07-11 21:40:45.978997] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:45.979415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:45.979447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:45.979465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:45.979711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:45.979966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:45.979992] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:45.980018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:45.983630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.374 [2024-07-11 21:40:45.992995] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:45.993416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:45.993449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:45.993467] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:45.993706] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:45.993962] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:45.993989] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:45.994005] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:45.997595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.374 [2024-07-11 21:40:46.006943] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:46.007332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:46.007364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:46.007382] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:46.007621] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:46.007879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:46.007905] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.374 [2024-07-11 21:40:46.007921] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.374 [2024-07-11 21:40:46.011514] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.374 [2024-07-11 21:40:46.020840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.374 [2024-07-11 21:40:46.021242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.374 [2024-07-11 21:40:46.021274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.374 [2024-07-11 21:40:46.021292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.374 [2024-07-11 21:40:46.021531] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.374 [2024-07-11 21:40:46.021785] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.374 [2024-07-11 21:40:46.021811] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.021832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 [2024-07-11 21:40:46.025420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.375 [2024-07-11 21:40:46.034727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.375 [2024-07-11 21:40:46.035122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.375 [2024-07-11 21:40:46.035154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.375 [2024-07-11 21:40:46.035173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.375 [2024-07-11 21:40:46.035413] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.375 [2024-07-11 21:40:46.035656] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.375 [2024-07-11 21:40:46.035680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.035696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 [2024-07-11 21:40:46.039284] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.375 [2024-07-11 21:40:46.048595] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.375 [2024-07-11 21:40:46.048967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.375 [2024-07-11 21:40:46.048999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.375 [2024-07-11 21:40:46.049017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.375 [2024-07-11 21:40:46.049257] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.375 [2024-07-11 21:40:46.049501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.375 [2024-07-11 21:40:46.049525] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.049542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 [2024-07-11 21:40:46.053134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.375 [2024-07-11 21:40:46.062447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.375 [2024-07-11 21:40:46.062844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.375 [2024-07-11 21:40:46.062876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.375 [2024-07-11 21:40:46.062894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.375 [2024-07-11 21:40:46.063133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.375 [2024-07-11 21:40:46.063376] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.375 [2024-07-11 21:40:46.063401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.063417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 [2024-07-11 21:40:46.067014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.375 [2024-07-11 21:40:46.076333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.375 [2024-07-11 21:40:46.076730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.375 [2024-07-11 21:40:46.076774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.375 [2024-07-11 21:40:46.076794] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.375 [2024-07-11 21:40:46.077034] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.375 [2024-07-11 21:40:46.077278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.375 [2024-07-11 21:40:46.077303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.077320] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 [2024-07-11 21:40:46.080911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.375 [2024-07-11 21:40:46.090226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.375 [2024-07-11 21:40:46.090647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.375 [2024-07-11 21:40:46.090678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.375 [2024-07-11 21:40:46.090697] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.375 [2024-07-11 21:40:46.090947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.375 [2024-07-11 21:40:46.091193] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.375 [2024-07-11 21:40:46.091217] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.091233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 [2024-07-11 21:40:46.094824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.375 [2024-07-11 21:40:46.104136] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.375 [2024-07-11 21:40:46.104544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.375 [2024-07-11 21:40:46.104575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.375 [2024-07-11 21:40:46.104594] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.375 [2024-07-11 21:40:46.104844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.375 [2024-07-11 21:40:46.105089] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.375 [2024-07-11 21:40:46.105114] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.105130] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 [2024-07-11 21:40:46.108714] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.375 [2024-07-11 21:40:46.118042] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.375 [2024-07-11 21:40:46.118444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.375 [2024-07-11 21:40:46.118476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.375 [2024-07-11 21:40:46.118494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.375 [2024-07-11 21:40:46.118733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.375 [2024-07-11 21:40:46.118993] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.375 [2024-07-11 21:40:46.119018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.119035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 1059115 Killed "${NVMF_APP[@]}" "$@" 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.375 [2024-07-11 21:40:46.122618] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=1060148 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 1060148 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 1060148 ']' 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:11.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
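(editor's note) The shell output interleaved in the block above explains the refusals: bdevperf.sh line 35 reports that the previous target (pid 1059115, "${NVMF_APP[@]}") was killed, and tgt_init/nvmfappstart immediately relaunch nvmf_tgt as pid 1060148 inside the cvl_0_0_ns_spdk network namespace with core mask 0xE, after which waitforlisten polls /var/tmp/spdk.sock with max_retries=100. A hedged sketch of that wait pattern follows; the real helper is waitforlisten in test/common/autotest_common.sh, and probing liveness with scripts/rpc.py rpc_get_methods is this sketch's assumption, not necessarily what the helper does:

  # Poll until the freshly started target owns a responsive RPC socket.
  pid=1060148                      # nvmfpid printed by the trace above
  rpc_addr=/var/tmp/spdk.sock      # UNIX domain socket named in the log
  for _ in $(seq 1 100); do        # mirrors max_retries=100
      # Stop early if the target died instead of coming up.
      kill -0 "$pid" 2>/dev/null || { echo "nvmf_tgt exited early" >&2; break; }
      # Socket exists and the RPC server answers: the target is listening.
      if [ -S "$rpc_addr" ] && scripts/rpc.py -s "$rpc_addr" rpc_get_methods >/dev/null 2>&1; then
          echo "target is up on $rpc_addr"
          break
      fi
      sleep 0.1
  done

Until this new target re-creates its TCP listener, the host side on the old connection keeps cycling through the reset/ECONNREFUSED pattern seen throughout this section.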
00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:11.375 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.375 [2024-07-11 21:40:46.131934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.375 [2024-07-11 21:40:46.132359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.375 [2024-07-11 21:40:46.132391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.375 [2024-07-11 21:40:46.132409] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.375 [2024-07-11 21:40:46.132648] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.375 [2024-07-11 21:40:46.132903] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.375 [2024-07-11 21:40:46.132928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.375 [2024-07-11 21:40:46.132945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.375 [2024-07-11 21:40:46.136529] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.635 [2024-07-11 21:40:46.145924] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.146336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.146367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.146385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.146624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.146882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.146912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.146929] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.150552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.635 [2024-07-11 21:40:46.159895] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.160305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.160337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.160354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.160593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.160847] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.160872] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.160887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.164472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.635 [2024-07-11 21:40:46.173799] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.174184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.174215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.174233] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.174472] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.174715] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.174739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.174794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.176703] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:11.635 [2024-07-11 21:40:46.176792] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:11.635 [2024-07-11 21:40:46.178140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.635 [2024-07-11 21:40:46.187318] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.187700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.187744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.187772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.188005] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.188262] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.188286] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.188300] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.191404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.635 [2024-07-11 21:40:46.200671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.201126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.201168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.201185] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.201428] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.201627] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.201646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.201659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.204609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.635 EAL: No free 2048 kB hugepages reported on node 1 00:34:11.635 [2024-07-11 21:40:46.213953] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.214360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.214386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.214401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.214650] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.214860] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.214879] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.214892] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.218385] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.635 [2024-07-11 21:40:46.227736] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.228129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.228157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.228173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.228416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.228616] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.228636] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.228648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.232156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
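(editor's note) The "EAL: No free 2048 kB hugepages reported on node 1" line at the head of this block is a DPDK notice, not an error: NUMA node 1 simply has no 2 MB hugepage pool configured, which is harmless as long as another node supplies the pages. The per-node counters the EAL reads can be inspected directly; a sketch using the standard Linux sysfs layout (paths exist on NUMA-aware kernels):

  # Show each NUMA node's 2048 kB hugepage pool; on this host the node1
  # counter is evidently zero, matching the EAL notice above.
  for d in /sys/devices/system/node/node*/hugepages/hugepages-2048kB; do
      echo "$d: $(cat "$d/nr_hugepages") pages, $(cat "$d/free_hugepages") free"
  done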
00:34:11.635 [2024-07-11 21:40:46.241716] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.242128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.242160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.242178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.242434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.242632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.242652] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.242664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.245778] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:34:11.635 [2024-07-11 21:40:46.246117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.635 [2024-07-11 21:40:46.255687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.256257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.256298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.635 [2024-07-11 21:40:46.256321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.635 [2024-07-11 21:40:46.256581] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.635 [2024-07-11 21:40:46.256794] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.635 [2024-07-11 21:40:46.256823] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.635 [2024-07-11 21:40:46.256840] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.635 [2024-07-11 21:40:46.260339] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.635 [2024-07-11 21:40:46.269687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.635 [2024-07-11 21:40:46.270180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.635 [2024-07-11 21:40:46.270215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.270236] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.270490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.270691] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.270710] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.270724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.274249] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.636 [2024-07-11 21:40:46.283578] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.283953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.283981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.284006] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.284243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.284444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.284463] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.284476] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.287941] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.636 [2024-07-11 21:40:46.297446] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.298018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.298068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.298090] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.298336] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.298557] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.298577] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.298593] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.302117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.636 [2024-07-11 21:40:46.311276] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.311773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.311824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.311844] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.312088] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.312290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.312309] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.312324] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.315807] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.636 [2024-07-11 21:40:46.325092] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.325481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.325509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.325526] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.325791] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.326005] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.326048] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.326062] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.329585] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.636 [2024-07-11 21:40:46.336738] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:11.636 [2024-07-11 21:40:46.336774] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:11.636 [2024-07-11 21:40:46.336788] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:11.636 [2024-07-11 21:40:46.336799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:11.636 [2024-07-11 21:40:46.336808] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:11.636 [2024-07-11 21:40:46.336982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:34:11.636 [2024-07-11 21:40:46.337047] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:11.636 [2024-07-11 21:40:46.337044] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:34:11.636 [2024-07-11 21:40:46.338587] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.339013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.339042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.339058] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.339289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.339503] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.339524] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.339538] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
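(editor's note) The reactor lines here tie back to the -m 0xE mask passed to nvmf_tgt earlier: 0xE is binary 1110, so bit 0 is clear and cores 1, 2 and 3 are selected, which is why app.c reported "Total cores available: 3" and exactly three reactors start (the two reactor records sharing one console line are just unsynchronized writes from separate cores). A small sketch of decoding such a mask:

  # Decode an SPDK/DPDK core mask: -m 0xE -> 0b1110 -> cores 1, 2, 3.
  mask=0xE
  for core in {0..7}; do
      if (( (mask >> core) & 1 )); then
          echo "core $core selected"
      fi
  done

The adjacent app_setup_trace notices also name the ready-made capture command for this run, 'spdk_trace -s nvmf -i 0', with /dev/shm/nvmf_trace.0 as the offline copy.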
00:34:11.636 [2024-07-11 21:40:46.342676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.636 [2024-07-11 21:40:46.352097] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.352648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.352686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.352706] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.352940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.353175] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.353196] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.353213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.356430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.636 [2024-07-11 21:40:46.365621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.366170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.366208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.366237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.366475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.366692] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.366713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.366730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.369931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.636 [2024-07-11 21:40:46.379220] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.379768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.379807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.379827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.380051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.380283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.380305] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.380322] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.383524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.636 [2024-07-11 21:40:46.392793] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.636 [2024-07-11 21:40:46.393267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.636 [2024-07-11 21:40:46.393302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.636 [2024-07-11 21:40:46.393321] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.636 [2024-07-11 21:40:46.393542] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.636 [2024-07-11 21:40:46.393774] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.636 [2024-07-11 21:40:46.393797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.636 [2024-07-11 21:40:46.393813] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.636 [2024-07-11 21:40:46.397036] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.896 [2024-07-11 21:40:46.406531] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.896 [2024-07-11 21:40:46.407066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-11 21:40:46.407105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.896 [2024-07-11 21:40:46.407125] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.896 [2024-07-11 21:40:46.407365] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.896 [2024-07-11 21:40:46.407582] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.896 [2024-07-11 21:40:46.407612] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.896 [2024-07-11 21:40:46.407629] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.896 [2024-07-11 21:40:46.411050] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.896 [2024-07-11 21:40:46.420090] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.896 [2024-07-11 21:40:46.420491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.896 [2024-07-11 21:40:46.420521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.896 [2024-07-11 21:40:46.420539] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.896 [2024-07-11 21:40:46.420794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.896 [2024-07-11 21:40:46.421015] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.896 [2024-07-11 21:40:46.421036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.896 [2024-07-11 21:40:46.421052] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.896 [2024-07-11 21:40:46.424238] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.896 [2024-07-11 21:40:46.433585] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.433964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-11 21:40:46.433993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.897 [2024-07-11 21:40:46.434009] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.897 [2024-07-11 21:40:46.434224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.897 [2024-07-11 21:40:46.434443] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.897 [2024-07-11 21:40:46.434464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.897 [2024-07-11 21:40:46.434478] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.897 [2024-07-11 21:40:46.437713] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.897 [2024-07-11 21:40:46.447181] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.447529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-11 21:40:46.447564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.897 [2024-07-11 21:40:46.447580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.897 [2024-07-11 21:40:46.447805] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.897 [2024-07-11 21:40:46.448024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.897 [2024-07-11 21:40:46.448045] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.897 [2024-07-11 21:40:46.448081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.897 [2024-07-11 21:40:46.451338] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.897 [2024-07-11 21:40:46.460711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.461134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-11 21:40:46.461163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.897 [2024-07-11 21:40:46.461178] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.897 [2024-07-11 21:40:46.461394] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.897 [2024-07-11 21:40:46.461621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.897 [2024-07-11 21:40:46.461641] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.897 [2024-07-11 21:40:46.461655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.897 [2024-07-11 21:40:46.464867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.897 [2024-07-11 21:40:46.469250] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:11.897 [2024-07-11 21:40:46.474138] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.474537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-11 21:40:46.474566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.897 [2024-07-11 21:40:46.474583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.897 [2024-07-11 21:40:46.474806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.897 [2024-07-11 21:40:46.475026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.897 [2024-07-11 21:40:46.475047] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.897 [2024-07-11 21:40:46.475061] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:34:11.897 [2024-07-11 21:40:46.478418] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.897 [2024-07-11 21:40:46.487783] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.488213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-11 21:40:46.488241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.897 [2024-07-11 21:40:46.488257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.897 [2024-07-11 21:40:46.488492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.897 [2024-07-11 21:40:46.488714] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.897 [2024-07-11 21:40:46.488748] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.897 [2024-07-11 21:40:46.488772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.897 [2024-07-11 21:40:46.491989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.897 [2024-07-11 21:40:46.501380] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.501851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-11 21:40:46.501887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.897 [2024-07-11 21:40:46.501907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.897 [2024-07-11 21:40:46.502142] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.897 [2024-07-11 21:40:46.502358] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.897 [2024-07-11 21:40:46.502378] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.897 [2024-07-11 21:40:46.502395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.897 [2024-07-11 21:40:46.505602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.897 Malloc0 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.897 [2024-07-11 21:40:46.514928] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.515434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-11 21:40:46.515465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.897 [2024-07-11 21:40:46.515485] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.897 [2024-07-11 21:40:46.515719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.897 [2024-07-11 21:40:46.515964] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.897 [2024-07-11 21:40:46.515987] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.897 [2024-07-11 21:40:46.516003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.897 [2024-07-11 21:40:46.519280] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.897 [2024-07-11 21:40:46.528429] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.528789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:11.897 [2024-07-11 21:40:46.528827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe0fed0 with addr=10.0.0.2, port=4420 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.897 [2024-07-11 21:40:46.528845] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe0fed0 is same with the state(5) to be set 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:11.897 [2024-07-11 21:40:46.529060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe0fed0 (9): Bad file descriptor 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:11.897 [2024-07-11 21:40:46.529279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:34:11.897 [2024-07-11 21:40:46.529301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:34:11.897 [2024-07-11 21:40:46.529314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:34:11.897 [2024-07-11 21:40:46.532588] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:34:11.897 [2024-07-11 21:40:46.532708] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:11.897 21:40:46 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 1059371 00:34:11.897 [2024-07-11 21:40:46.542061] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:34:11.897 [2024-07-11 21:40:46.581854] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
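For readers following the target bring-up interleaved through the reconnect noise above: rpc_cmd in the autotest framework forwards to SPDK's scripts/rpc.py, so a standalone equivalent of the sequence just completed would look roughly like this (names, sizes and flags copied from the log; run from an SPDK checkout against the already-running nvmf_tgt):

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420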
00:34:21.879 00:34:21.879 Latency(us)
00:34:21.879 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:34:21.879 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:34:21.879 Verification LBA range: start 0x0 length 0x4000
00:34:21.879 Nvme1n1 : 15.02 6656.55 26.00 8471.81 0.00 8436.19 600.75 19612.25
00:34:21.879 ===================================================================================================================
00:34:21.879 Total : 6656.55 26.00 8471.81 0.00 8436.19 600.75 19612.25
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:34:21.879 rmmod nvme_tcp
00:34:21.879 rmmod nvme_fabrics
00:34:21.879 rmmod nvme_keyring
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 1060148 ']'
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 1060148
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 1060148 ']'
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 1060148
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1060148
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1060148'
killing process with pid 1060148
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 1060148
00:34:21.879 21:40:55 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 1060148
00:34:21.879 21:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:34:21.879 21:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
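As a sanity check on the Latency(us) summary above: at the 4096-byte IO size, the MiB/s column follows directly from IOPS, 6656.55 * 4096 / 1048576 ≈ 26.00 MiB/s, which matches the reported value. The same arithmetic as a one-liner:

    awk 'BEGIN { printf "%.2f MiB/s\n", 6656.55 * 4096 / 1048576 }'
    # prints 26.00 MiB/s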
00:34:21.879 21:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:21.879 21:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:21.879 21:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:21.879 21:40:56 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:21.879 21:40:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:21.879 21:40:56 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.778 21:40:58 nvmf_tcp.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:23.778 00:34:23.778 real 0m22.282s 00:34:23.778 user 0m59.661s 00:34:23.778 sys 0m4.207s 00:34:23.778 21:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:23.778 21:40:58 nvmf_tcp.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:34:23.778 ************************************ 00:34:23.778 END TEST nvmf_bdevperf 00:34:23.778 ************************************ 00:34:23.778 21:40:58 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:23.778 21:40:58 nvmf_tcp -- nvmf/nvmf.sh@123 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:23.778 21:40:58 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:23.778 21:40:58 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:23.778 21:40:58 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:23.778 ************************************ 00:34:23.778 START TEST nvmf_target_disconnect 00:34:23.778 ************************************ 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:34:23.778 * Looking for test storage... 
00:34:23.778 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@448 -- # 
prepare_net_devs 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:34:23.778 21:40:58 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 
00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:25.688 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:25.688 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.688 21:41:00 
nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:25.688 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:25.688 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:25.688 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:25.688 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:34:25.688 00:34:25.688 --- 10.0.0.2 ping statistics --- 00:34:25.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.688 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:25.688 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:25.688 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:34:25.688 00:34:25.688 --- 10.0.0.1 ping statistics --- 00:34:25.688 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:25.688 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:25.688 21:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:25.688 ************************************ 00:34:25.688 START TEST nvmf_target_disconnect_tc1 00:34:25.689 ************************************ 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:34:25.689 
21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:34:25.689 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:25.689 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.947 [2024-07-11 21:41:00.471513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:25.947 [2024-07-11 21:41:00.471576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7b7e70 with addr=10.0.0.2, port=4420 00:34:25.947 [2024-07-11 21:41:00.471613] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:34:25.947 [2024-07-11 21:41:00.471633] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:34:25.947 [2024-07-11 21:41:00.471646] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:34:25.947 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:34:25.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:34:25.947 Initializing NVMe Controllers 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:34:25.947 00:34:25.947 real 0m0.096s 00:34:25.947 user 0m0.041s 00:34:25.947 sys 0m0.051s 
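tc1 passes precisely because the probe is expected to fail: the NOT wrapper above inverts the exit status of the reconnect example, and es=1 from the refused connection satisfies the (( !es == 0 )) check. A minimal sketch of that inversion pattern (hypothetical helper, not the autotest implementation):

    expect_failure() {
        # succeed only if the wrapped command fails
        if "$@"; then
            echo "unexpected success: $*" >&2
            return 1
        fi
    }
    expect_failure ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'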
00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:34:25.947 ************************************ 00:34:25.947 END TEST nvmf_target_disconnect_tc1 00:34:25.947 ************************************ 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:25.947 ************************************ 00:34:25.947 START TEST nvmf_target_disconnect_tc2 00:34:25.947 ************************************ 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1063180 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1063180 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1063180 ']' 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:25.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:25.947 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:25.947 [2024-07-11 21:41:00.582623] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:25.947 [2024-07-11 21:41:00.582712] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:25.947 EAL: No free 2048 kB hugepages reported on node 1 00:34:25.947 [2024-07-11 21:41:00.658291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:26.206 [2024-07-11 21:41:00.758088] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:26.206 [2024-07-11 21:41:00.758155] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.206 [2024-07-11 21:41:00.758172] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.206 [2024-07-11 21:41:00.758186] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.206 [2024-07-11 21:41:00.758197] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:26.206 [2024-07-11 21:41:00.758285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:34:26.206 [2024-07-11 21:41:00.758341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:26.206 [2024-07-11 21:41:00.758406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:26.206 [2024-07-11 21:41:00.758409] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.206 Malloc0 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:34:26.206 21:41:00 
nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.206 [2024-07-11 21:41:00.941273] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:34:26.206 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.207 [2024-07-11 21:41:00.969538] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:26.207 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:26.466 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:26.466 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=1063320 00:34:26.466 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:34:26.466 21:41:00 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:26.466 EAL: No free 2048 kB 
hugepages reported on node 1 00:34:28.378 21:41:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 1063180 00:34:28.378 21:41:02 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 [2024-07-11 21:41:02.994776] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:34:28.378 starting 
I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Write completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 Read completed with error (sct=0, sc=8) 00:34:28.378 starting I/O failed 00:34:28.378 [2024-07-11 21:41:02.995113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:34:28.378 [2024-07-11 21:41:02.995309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.378 [2024-07-11 21:41:02.995340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.378 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.995463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.995488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 
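For reference, the rpc_cmd calls traced above are what set up the target side; rpc_cmd in the autotest harness is assumed to forward to SPDK's scripts/rpc.py, so a standalone equivalent of the traced sequence, a sketch only, assuming a running nvmf_tgt and the default RPC socket, would be:

  # Sketch of the target-side setup traced above (assumes nvmf_tgt is up and
  # that rpc_cmd wraps scripts/rpc.py with the default RPC socket):
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0      # 64 MB malloc bdev, 512 B blocks
  ./scripts/rpc.py nvmf_create_transport -t tcp -o           # "*** TCP Transport Init ***" above
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420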
00:34:28.379 [2024-07-11 21:41:02.995659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.995683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.995822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.995849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.995979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.996019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.996161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.996188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.996293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.996320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.996481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.996514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.996624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.996652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.996796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.996824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.996928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.996955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.997113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.997140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 
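The host-side I/O in this stretch comes from the reconnect example launched at host/target_disconnect.sh@40 above. A hedged reading of its flags, inferred from SPDK's perf-style option parsing rather than stated by the log:

  # Interpretation of the traced invocation (flag meanings assumed, treat as a
  # reading, not documentation):
  #   -q 32           32 outstanding I/Os per qpair
  #   -o 4096         4 KiB I/O size
  #   -w randrw -M 50 mixed random workload, ~50% reads
  #   -t 10           10 s run time
  #   -c 0xF          cores 0-3 (the target holds 0xF0, cores 4-7 per the reactor notices)
  #   -r '...'        transport ID of the TCP listener created above
  build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'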
00:34:28.379 [2024-07-11 21:41:02.997272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.997299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.997459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.997502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.997728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.997766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.997900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.997929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.998036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.998063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.998200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.998227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.998329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.998356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.998527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.998570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.998700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.998728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.998871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.998899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 
00:34:28.379 [2024-07-11 21:41:02.999028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.999055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.999255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.999282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.999424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.999450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.999580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.999607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.999759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.999805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:02.999907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:02.999934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.000049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.000092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.000268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.000296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.000505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.000532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.000667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.000694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 
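From here on, every reconnect attempt fails identically: connect() inside posix_sock_create() returns errno 111, which on Linux is ECONNREFUSED. That is expected at this point, since the target was hard-killed above and nothing is listening on 10.0.0.2:4420 yet. The two numeric codes in this log decode as follows on any Linux box:

  # connect() errno 111, and the CQ transport error -6 (a negated errno):
  python3 -c 'import errno,os; e=111; print(e, errno.errorcode[e], os.strerror(e))'
  # 111 ECONNREFUSED Connection refused
  python3 -c 'import errno,os; e=6; print(e, errno.errorcode[e], os.strerror(e))'
  # 6 ENXIO No such device or address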
00:34:28.379 [2024-07-11 21:41:03.000842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.000869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.001010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.001038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.001149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.001180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.001320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.001347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.001485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.001512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.001646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.001673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.001799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.001843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.001978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.002006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.002141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.379 [2024-07-11 21:41:03.002169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.379 qpair failed and we were unable to recover it. 00:34:28.379 [2024-07-11 21:41:03.002321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.002365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 
00:34:28.380 [2024-07-11 21:41:03.002553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.002580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.002762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.002806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.002939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.002965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.003065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.003092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.003218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.003246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.003381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.003407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.003571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.003599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.003728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.003763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.003871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.003899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.004034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.004061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 
00:34:28.380 [2024-07-11 21:41:03.004231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.004258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.004385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.004413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.004570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.004597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.004730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.004763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.004902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.004930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.005065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.005092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.005227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.005254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.005369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.005396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.005554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.005581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.005714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.005741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 
00:34:28.380 [2024-07-11 21:41:03.005963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.005990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.006144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.006171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.006328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.006371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.006554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.006581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.006685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.006712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.006851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.006878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.006993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.007020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.007151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.007177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.007308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.007335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.007494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.007521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 
00:34:28.380 [2024-07-11 21:41:03.007681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.007707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.007816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.007842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 [2024-07-11 21:41:03.007976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.380 [2024-07-11 21:41:03.008002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.380 qpair failed and we were unable to recover it. 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Write completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Write completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Write completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Write completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Write completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Write completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.380 Read completed with error (sct=0, sc=8) 00:34:28.380 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed 
with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 [2024-07-11 21:41:03.008305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Read completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 Write completed with error (sct=0, sc=8) 00:34:28.381 starting I/O failed 00:34:28.381 [2024-07-11 21:41:03.008625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:34:28.381 [2024-07-11 21:41:03.008825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.008866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock 
connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.009009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.009037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.009197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.009225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.009333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.009360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.009498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.009527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.009707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.009737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.009897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.009923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.010032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.010058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.010191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.010218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.010355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.010397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 
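The bursts of "Read/Write completed with error (sct=0, sc=8)" bracketing the qpair teardowns are the queued I/Os (up to the -q 32 per qpair) being failed back as their connections drop; sct=0/sc=8 reads as NVMe generic status "Command Aborted due to SQ Deletion", an interpretation of the decimal codes, not something the log states. For triaging a run like this, some one-liners over a saved copy of the console output (the build.log filename is hypothetical):

  # Hypothetical triage over a saved copy of this console output:
  grep -c 'errno = 111' build.log                           # refused reconnect attempts
  grep -o 'on qpair id [0-9]*' build.log | sort | uniq -c   # qpairs hit by CQ transport errors
  grep -c 'completed with error (sct=0, sc=8)' build.log    # aborted in-flight I/Os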
00:34:28.381 [2024-07-11 21:41:03.010511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.010540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.010711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.010741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.010915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.010942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.011047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.011077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.011183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.011210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.011319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.011346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.011531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.011561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.011716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.011742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.011886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.011913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.012041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.012068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 
00:34:28.381 [2024-07-11 21:41:03.012865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.012893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.013052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.013078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.013207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.013235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.013376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.013418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.013532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.381 [2024-07-11 21:41:03.013563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.381 qpair failed and we were unable to recover it. 00:34:28.381 [2024-07-11 21:41:03.013725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.013759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.013895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.013922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.014084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.014110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.014243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.014271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.014403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.014430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 
00:34:28.382 [2024-07-11 21:41:03.014557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.014584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.014762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.014789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.014921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.014948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.015089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.015115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.015261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.015288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.015419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.015446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.015575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.015601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.015737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.015771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.015906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.015933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.016072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.016098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 
00:34:28.382 [2024-07-11 21:41:03.016229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.016260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.016379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.016408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.016557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.016584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.016689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.016716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.016857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.016884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.017015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.017042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.017144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.017171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.017301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.017327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.017462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.017488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 00:34:28.382 [2024-07-11 21:41:03.017655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.382 [2024-07-11 21:41:03.017696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.382 qpair failed and we were unable to recover it. 
00:34:28.388 [2024-07-11 21:41:03.051779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.051807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.051926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.051954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.052063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.052090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.052196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.052224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.052337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.052364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.052517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.052558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.052697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.052726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.052842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.052870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.053024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.053054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.053199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.053227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 
00:34:28.388 [2024-07-11 21:41:03.053329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.053357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.053512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.053540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.053648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.053676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.053812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.053840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.053956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.053983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.054094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.054121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.054252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.054279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.054413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.054442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.054547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.054579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.054676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.054703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 
00:34:28.388 [2024-07-11 21:41:03.054809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.054837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.054956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.388 [2024-07-11 21:41:03.054984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.388 qpair failed and we were unable to recover it. 00:34:28.388 [2024-07-11 21:41:03.055118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.055145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.055297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.055325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.055462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.055490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.055593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.055621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.055767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.055795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.055927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.055972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.056087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.056118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.056301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.056328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 
00:34:28.389 [2024-07-11 21:41:03.056441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.056468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.056641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.056672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.056828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.056857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.056969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.056996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.057115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.057142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.057298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.057342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.057473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.057500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.057628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.057655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.057794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.057822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.057930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.057957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 
00:34:28.389 [2024-07-11 21:41:03.058093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.058120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.058270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.058299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.058432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.058459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.058599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.058640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.058747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.058783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.058905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.058946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.059081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.059110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.059293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.059320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.059422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.059449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.059624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.059652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 
00:34:28.389 [2024-07-11 21:41:03.059785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.059812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.059936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.059966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.060129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.060157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.060290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.060317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.060492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.060521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.060649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.060676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.060811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.060840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.060968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.061008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.061114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.061142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.061283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.061311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 
00:34:28.389 [2024-07-11 21:41:03.061420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.061447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.061546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.061573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.389 [2024-07-11 21:41:03.061705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.389 [2024-07-11 21:41:03.061731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.389 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.061839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.061867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.061968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.061995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.062123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.062152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.062300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.062329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.062493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.062522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.062629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.062658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.062766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.062794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 
00:34:28.390 [2024-07-11 21:41:03.062959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.062987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.063120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.063148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.063324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.063352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.063459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.063486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.063688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.063716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.063824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.063852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.063979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.064007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.064211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.064258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.064553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.064583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.064762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.064793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 
00:34:28.390 [2024-07-11 21:41:03.064952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.064982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.065127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.065154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.065445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.065499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.065684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.065710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.065864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.065893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.066032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.066067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.066200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.066228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.066374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.066415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.066578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.066607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.066740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.066777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 
00:34:28.390 [2024-07-11 21:41:03.066898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.066926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.067033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.067073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.067230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.067275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.067512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.067545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.067698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.067726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.067875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.067903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.068026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.068056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.068310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.068337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.068497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.068524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.068659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.068687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 
00:34:28.390 [2024-07-11 21:41:03.068819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.068847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.068954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.068997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.069233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.390 [2024-07-11 21:41:03.069294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.390 qpair failed and we were unable to recover it. 00:34:28.390 [2024-07-11 21:41:03.069501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.069529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.069660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.069687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.069823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.069851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.069978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.070036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.070197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.070224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.070380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.070407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.070526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.070567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 
00:34:28.391 [2024-07-11 21:41:03.070703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.070731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.070889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.070918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.071029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.071076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.071220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.071250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.071414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.071458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.071614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.071642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.071805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.071833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.071970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.071997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.072100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.072126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.072256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.072283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 
00:34:28.391 [2024-07-11 21:41:03.072448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.072477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.072636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.072665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.072823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.072850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.072983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.073010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.073157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.073209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.073479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.073537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.073722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.073757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.073894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.073921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.074032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.074060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 00:34:28.391 [2024-07-11 21:41:03.074193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.391 [2024-07-11 21:41:03.074221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.391 qpair failed and we were unable to recover it. 
00:34:28.391 [2024-07-11 21:41:03.074362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.391 [2024-07-11 21:41:03.074432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.391 qpair failed and we were unable to recover it.
00:34:28.391 [... the three-record sequence above repeats with only timestamp variation (~200 occurrences between 21:41:03.074 and 21:41:03.112, elapsed marks 00:34:28.391 - 00:34:28.397) as the initiator retries the connection; the tqpair handle cycles through 0x7fb7a8000b90, 0x7fb7a0000b90, 0x1c1ef20, and 0x7fb798000b90 ...]
00:34:28.397 [2024-07-11 21:41:03.112069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.397 [2024-07-11 21:41:03.112095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.397 qpair failed and we were unable to recover it.
00:34:28.397 [2024-07-11 21:41:03.112235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.112263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.112397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.112423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.112598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.112638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.112777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.112806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.112984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.113014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.113224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.113251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.113386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.113429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.113594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.113624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.113827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.113868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.114012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.114052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 
00:34:28.397 [2024-07-11 21:41:03.114179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.114226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.114436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.114464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.114625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.114652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.114778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.114812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.114970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.114997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.115153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.115180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.115287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.115314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.115452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.115479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.115637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.115665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.115763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.115789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 
00:34:28.397 [2024-07-11 21:41:03.115918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.115963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.116149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.116183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.116344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.116371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.116506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.116534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.116665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.116692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.116872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.116902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.117102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.117143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.117308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.117336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.117495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.117525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.117707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.117734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 
00:34:28.397 [2024-07-11 21:41:03.117860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.117890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.118008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.397 [2024-07-11 21:41:03.118038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.397 qpair failed and we were unable to recover it. 00:34:28.397 [2024-07-11 21:41:03.118191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.118230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.118406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.118452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.118617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.118651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.118809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.118837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.118969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.118998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.119114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.119143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.119295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.119322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.119427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.119453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 
00:34:28.398 [2024-07-11 21:41:03.119555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.119582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.119696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.119723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.119853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.119894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.120059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.120087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.120189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.120217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.120352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.120380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.120513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.120539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.120675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.120701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.120858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.120905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.121092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.121122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 
00:34:28.398 [2024-07-11 21:41:03.121240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.121267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.121373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.121401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.121531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.121559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.121718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.121749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.121910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.121957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.122120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.122165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.122311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.122355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.122495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.122522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.122690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.122717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.122888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.122916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 
00:34:28.398 [2024-07-11 21:41:03.123018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.123045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.123206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.123233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.123367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.123395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.123560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.123587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.123722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.123750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.123892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.123936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.124090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.124136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.124298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.124343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.124525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.124565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.124710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.124740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 
00:34:28.398 [2024-07-11 21:41:03.124910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.124941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.125109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.125139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.398 [2024-07-11 21:41:03.125400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.398 [2024-07-11 21:41:03.125453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.398 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.125625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.125655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.125812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.125840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.125978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.126005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.126135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.126162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.126315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.126344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.126526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.126553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.126712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.126739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 
00:34:28.399 [2024-07-11 21:41:03.126863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.126904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.127059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.127103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.127255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.127287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.127422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.127452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.127640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.127667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.127801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.127829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.127939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.127965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.128100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.128127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.128260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.128286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.128463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.128492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 
00:34:28.399 [2024-07-11 21:41:03.128648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.128678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.128817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.128845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.128980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.129007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.129141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.129169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.129330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.129375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.129523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.129567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.129712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.129761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.129893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.129922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.130042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.130072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.130207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.130251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 
00:34:28.399 [2024-07-11 21:41:03.130401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.130433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.130585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.130612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.130746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.130784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.130897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.130923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.131068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.131109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.131266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.131297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.131441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.131471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.399 [2024-07-11 21:41:03.131635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.399 [2024-07-11 21:41:03.131666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.399 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.131829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.131869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.131987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.132015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 
00:34:28.400 [2024-07-11 21:41:03.132202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.132246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.132403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.132449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.132583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.132610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.132780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.132808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.132938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.132965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.133075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.133103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.133239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.133267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.133473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.133527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.133705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.133732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.133870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.133897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 
00:34:28.400 [2024-07-11 21:41:03.134054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.134088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.134213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.134245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.134355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.134385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.134545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.134572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.134684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.134710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.134877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.134904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.135038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.135065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.135218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.135248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.135426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.135453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 00:34:28.400 [2024-07-11 21:41:03.135565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.400 [2024-07-11 21:41:03.135589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.400 qpair failed and we were unable to recover it. 
00:34:28.400 [2024-07-11 21:41:03.135748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.400 [2024-07-11 21:41:03.135781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.400 qpair failed and we were unable to recover it.
00:34:28.688 [the same three-message sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats through 2024-07-11 21:41:03.172004, alternating among tqpair=0x7fb7a8000b90, tqpair=0x1c1ef20, and tqpair=0x7fb7a0000b90, always with addr=10.0.0.2, port=4420]
00:34:28.689 [2024-07-11 21:41:03.172140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.172168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.172271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.172297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.172483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.172514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.172702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.172729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.172896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.172923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.173076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.173107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.173287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.173314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.173473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.173500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.173613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.173639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.173743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.173778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 
00:34:28.689 [2024-07-11 21:41:03.173935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.173960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.174094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.174119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.174220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.174245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.174375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.174403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.174598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.174625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.174800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.174831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.175010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.175038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.175198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.175224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.175347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.175374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.175526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.175556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 
00:34:28.689 [2024-07-11 21:41:03.175678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.175706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.175853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.175897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.176032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.176062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.176216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.176242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.176343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.176370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.176501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.176528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.176663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.176692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.176826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.176854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.176988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.177014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.177169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.177196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 
00:34:28.689 [2024-07-11 21:41:03.177357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.177386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.177559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.177585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.177778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.177828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.177930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.177957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.178126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.178160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.178303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.178332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.178467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.178493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.178628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.178655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.178849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.178876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 00:34:28.689 [2024-07-11 21:41:03.178981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.689 [2024-07-11 21:41:03.179009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.689 qpair failed and we were unable to recover it. 
00:34:28.690 [2024-07-11 21:41:03.179114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.179141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.179286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.179312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.179443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.179470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.179601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.179630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.179764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.179791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.179925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.179951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.180087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.180116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.180225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.180253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.180419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.180446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.180558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.180584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 
00:34:28.690 [2024-07-11 21:41:03.180718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.180745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.180886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.180913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.181046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.181073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.181203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.181230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.181389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.181416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.181582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.181612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.181786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.181813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.181964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.182009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.182140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.182167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.182294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.182324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 
00:34:28.690 [2024-07-11 21:41:03.182467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.182494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.182630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.182660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.182799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.182827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.182960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.182988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.183101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.183128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.183227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.183254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.183416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.183442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.183571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.183598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.183765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.183792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.183907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.183934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 
00:34:28.690 [2024-07-11 21:41:03.184067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.184093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.184205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.184230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.184367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.184393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.184495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.184520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.184804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.184835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.185024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.185051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.185183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.185210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.185321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.185348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.185516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.185543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 00:34:28.690 [2024-07-11 21:41:03.185648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.185675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.690 qpair failed and we were unable to recover it. 
00:34:28.690 [2024-07-11 21:41:03.185834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.690 [2024-07-11 21:41:03.185861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.185991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.186018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.186166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.186196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.186374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.186404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.186587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.186614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.186766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.186797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.186950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.186977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.187082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.187108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.187248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.187274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.187451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.187479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 
00:34:28.691 [2024-07-11 21:41:03.187624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.187654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.187840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.187868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.188029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.188055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.188158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.188201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.188372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.188401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.188507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.188549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.188705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.188735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.188891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.188917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.189051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.189077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.189263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.189290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 
00:34:28.691 [2024-07-11 21:41:03.189444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.189471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.189610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.189643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.189784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.189815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.189973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.190000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.190161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.190186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.190351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.190377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.190507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.190535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.190697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.190726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.190878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.190904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.191018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.191045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 
00:34:28.691 [2024-07-11 21:41:03.191150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.191176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.191330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.191357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.191496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.191523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.191681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.191707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.191908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.191935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.192100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.192127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.192285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.192312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.192412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.192439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.192597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.192627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 00:34:28.691 [2024-07-11 21:41:03.192817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.691 [2024-07-11 21:41:03.192845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.691 qpair failed and we were unable to recover it. 
00:34:28.691 [2024-07-11 21:41:03.192979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.193006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.193135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.193162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.193291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.193318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.193448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.193474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.193585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.193613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.193713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.193740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.193880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.193907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.194065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.194091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.194198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.194225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 00:34:28.692 [2024-07-11 21:41:03.194384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.692 [2024-07-11 21:41:03.194426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.692 qpair failed and we were unable to recover it. 
00:34:28.692 [2024-07-11 21:41:03.194597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.692 [2024-07-11 21:41:03.194626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.692 qpair failed and we were unable to recover it.
00:34:28.697 [2024-07-11 21:41:03.229399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.229429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.229609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.229636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.229733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.229762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.229901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.229928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.230063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.230089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.230220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.230247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.230407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.230434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.230568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.230594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.230728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.230771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.230902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.230949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 
00:34:28.697 [2024-07-11 21:41:03.231081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.231107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.231281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.231311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.231460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.231490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.231633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.231661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.231824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.231851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.231959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.231987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.232134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.232163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.697 [2024-07-11 21:41:03.232332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.697 [2024-07-11 21:41:03.232358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.697 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.232490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.232516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.232645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.232671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 
00:34:28.698 [2024-07-11 21:41:03.232815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.232846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.232998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.233027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.233204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.233230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.233363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.233407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.233552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.233581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.233706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.233735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.233870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.233896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.234051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.234094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.234278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.234305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.234437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.234465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 
00:34:28.698 [2024-07-11 21:41:03.234660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.234687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.234796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.234840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.234984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.235013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.235159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.235189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.235370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.235397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.235495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.235536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.235679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.235709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.235857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.235888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.236022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.236053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.236189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.236216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 
00:34:28.698 [2024-07-11 21:41:03.236401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.236431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.236593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.236620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.236761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.236788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.236897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.236925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.237087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.237113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.237266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.237308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.237462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.237489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.237639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.237675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.237827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.237857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.238044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.238071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 
00:34:28.698 [2024-07-11 21:41:03.238202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.238229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.238336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.238364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.238553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.238583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.238727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.238763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.238923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.238950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.239098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.239127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.239245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.239294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.239408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.698 [2024-07-11 21:41:03.239433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.698 qpair failed and we were unable to recover it. 00:34:28.698 [2024-07-11 21:41:03.239577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.239604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.239773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.239804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 
00:34:28.699 [2024-07-11 21:41:03.239951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.239982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.240119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.240145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.240273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.240299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.240404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.240431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.240626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.240653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.240789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.240818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.240949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.240975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.241108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.241134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.241262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.241291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.241429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.241458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 
00:34:28.699 [2024-07-11 21:41:03.241626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.241656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.241833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.241860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.241963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.241990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.242156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.242182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.242312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.242339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.242467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.242493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.242650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.242679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.242849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.242879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.243027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.243058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.243172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.243198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 
00:34:28.699 [2024-07-11 21:41:03.243329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.243357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.243507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.243536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.243685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.243712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.243832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.243860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.244058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.244085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.244215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.244241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.244423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.244450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.244553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.244579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.244738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.244774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.244899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.244928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 
00:34:28.699 [2024-07-11 21:41:03.245080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.245106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.245241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.245268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.245463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.245492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.245661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.245691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.245846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.245873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.245983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.246009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.246173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.246207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.246344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.699 [2024-07-11 21:41:03.246372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.699 qpair failed and we were unable to recover it. 00:34:28.699 [2024-07-11 21:41:03.246517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.246543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.246648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.246672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 
00:34:28.700 [2024-07-11 21:41:03.246863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.246893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.247062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.247091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.247220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.247248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.247361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.247388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.247524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.247551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.247713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.247743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.247930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.247957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.248094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.248120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.248276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.248303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.248407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.248433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 
00:34:28.700 [2024-07-11 21:41:03.248577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.248618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.248731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.248767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.248905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.248933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.249048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.249074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.249198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.249244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.249368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.249412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.249550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.249577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.249711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.249739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.249851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.249883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.250015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.250043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 
00:34:28.700 [2024-07-11 21:41:03.250173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.250200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.250304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.250329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.250460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.250489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.250615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.250642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.250772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.250800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.250936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.250963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.251109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.251139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.251312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.251357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.251516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.251543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.251698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.251725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 
00:34:28.700 [2024-07-11 21:41:03.251891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.251924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.252075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.252106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.252220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.252249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.252366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.252396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.252536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.252565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.252738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.252775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.252942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.252983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.253118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.253149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.253278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.700 [2024-07-11 21:41:03.253323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.700 qpair failed and we were unable to recover it. 00:34:28.700 [2024-07-11 21:41:03.253469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.253499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 
00:34:28.701 [2024-07-11 21:41:03.253648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.253677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.253833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.253861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.253998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.254024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.254140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.254184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.254343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.254417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.254602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.254634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.254786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.254815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.254922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.254948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.255052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.255079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.255200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.255230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 
00:34:28.701 [2024-07-11 21:41:03.255365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.255410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.255524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.255554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.255699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.255728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.255861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.255888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.256033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.256062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.256173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.256203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.256333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.256362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.256508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.256538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.256711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.256743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.256854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.256880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 
00:34:28.701 [2024-07-11 21:41:03.256985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.257011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.257145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.257176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.257312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.257356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.257498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.257527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.257684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.257710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.257861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.257888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.258040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.258070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.258233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.258283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.258422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.258452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.258624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.258651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 
00:34:28.701 [2024-07-11 21:41:03.258779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.258806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.258942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.258969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.259129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.259188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.701 [2024-07-11 21:41:03.259366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.701 [2024-07-11 21:41:03.259394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.701 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.259604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.259634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.259774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.259800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.259904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.259930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.260076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.260105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.260273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.260303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.260438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.260468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 
00:34:28.702 [2024-07-11 21:41:03.260628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.260653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.260779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.260806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.260932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.260959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.261091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.261135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.261340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.261373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.261547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.261581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.261707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.261734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.261900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.261927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.262082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.262151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.262264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.262295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 
00:34:28.702 [2024-07-11 21:41:03.262440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.262469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.262629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.262666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.262862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.262910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.263016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.263063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.263245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.263273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.263450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.263498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.263638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.263679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.263820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.263847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.263962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.263988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.264123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.264165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 
00:34:28.702 [2024-07-11 21:41:03.264366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.264396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.264536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.264566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.264724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.264748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.264865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.264891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.265044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.265074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.265226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.265252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.265384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.265411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.265515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.265541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.265659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.265701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.265870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.265897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 
00:34:28.702 [2024-07-11 21:41:03.265998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.266022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.266143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.266170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.266266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.266294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.266424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.266451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.702 qpair failed and we were unable to recover it. 00:34:28.702 [2024-07-11 21:41:03.266566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.702 [2024-07-11 21:41:03.266593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.266718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.266744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.266881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.266908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.267007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.267031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.267138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.267170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.267302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.267329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 
00:34:28.703 [2024-07-11 21:41:03.267518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.267544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.267650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.267675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.267843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.267871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.267969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.267995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.268145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.268175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.268328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.268355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.268465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.268492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.268595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.268636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.268777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.268820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.268924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.268950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 
00:34:28.703 [2024-07-11 21:41:03.269077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.269103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.269209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.269234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.269368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.269397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.269514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.269544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.269695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.269721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.269856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.269882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.269994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.270019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.270176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.270203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.270340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.270367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.270500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.270531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 
00:34:28.703 [2024-07-11 21:41:03.270665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.270691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.270827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.270859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.270965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.270991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.271095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.271119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.271221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.271247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.271341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.271367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.271474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.271500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.271652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.271695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.271850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.271878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.271985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.272013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 
00:34:28.703 [2024-07-11 21:41:03.272150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.272176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.272287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.272314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.272418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.272442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.272573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.272604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.272801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.272828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.703 [2024-07-11 21:41:03.272936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.703 [2024-07-11 21:41:03.272960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.703 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.273068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.273093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.273217] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.273245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.273405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.273431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.273563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.273589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 
00:34:28.704 [2024-07-11 21:41:03.273732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.273764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.273924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.273952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.274087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.274114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.274246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.274292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.274413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.274443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.274563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.274593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.274746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.274784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.274926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.274953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.275096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.275126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.275257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.275288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 
00:34:28.704 [2024-07-11 21:41:03.275411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.275439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.275574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.275601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.275729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.275761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.275864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.275896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.276021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.276048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.276156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.276183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.276282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.276308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.276419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.276446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.276570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.276600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.276771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.276816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 
00:34:28.704 [2024-07-11 21:41:03.276951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.276978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.277159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.277189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.277318] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.277345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.277449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.277475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.277603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.277630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.277778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.277807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.277943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.277970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.278154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.278184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.278324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.278353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.278496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.278526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 
00:34:28.704 [2024-07-11 21:41:03.278685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.278712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.278820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.278846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.278950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.278977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.279078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.279105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.279218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.279245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.279421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.279450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.704 [2024-07-11 21:41:03.279581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.704 [2024-07-11 21:41:03.279610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.704 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.279766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.279796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.279958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.279985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.280129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.280156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 
00:34:28.705 [2024-07-11 21:41:03.280312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.280339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.280504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.280534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.280652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.280678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.280815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.280841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.280965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.280993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.281129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.281158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.281317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.281344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.281481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.281508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.281667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.281694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.281820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.281847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 
00:34:28.705 [2024-07-11 21:41:03.281988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.282015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.282148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.282174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.282310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.282340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.282479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.282508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.282672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.282702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.282857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.282885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.283016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.283043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.283195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.283238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.283372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.283399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 00:34:28.705 [2024-07-11 21:41:03.283528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.705 [2024-07-11 21:41:03.283572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.705 qpair failed and we were unable to recover it. 
00:34:28.709 [2024-07-11 21:41:03.312553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.709 [2024-07-11 21:41:03.312580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:28.709 qpair failed and we were unable to recover it.
00:34:28.709 [2024-07-11 21:41:03.312687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.709 [2024-07-11 21:41:03.312715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:28.709 qpair failed and we were unable to recover it.
00:34:28.709 [2024-07-11 21:41:03.312872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.709 [2024-07-11 21:41:03.312913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.709 qpair failed and we were unable to recover it.
00:34:28.709 [2024-07-11 21:41:03.313031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.709 [2024-07-11 21:41:03.313068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.709 qpair failed and we were unable to recover it.
00:34:28.709 [2024-07-11 21:41:03.313205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.709 [2024-07-11 21:41:03.313232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.709 qpair failed and we were unable to recover it.
00:34:28.709 [2024-07-11 21:41:03.313399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.710 [2024-07-11 21:41:03.313430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.710 qpair failed and we were unable to recover it.
00:34:28.710 [2024-07-11 21:41:03.313566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.710 [2024-07-11 21:41:03.313596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.710 qpair failed and we were unable to recover it.
00:34:28.710 [2024-07-11 21:41:03.313761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.710 [2024-07-11 21:41:03.313792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.710 qpair failed and we were unable to recover it.
00:34:28.710 [2024-07-11 21:41:03.313955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.710 [2024-07-11 21:41:03.313984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.710 qpair failed and we were unable to recover it.
00:34:28.710 [2024-07-11 21:41:03.314096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.710 [2024-07-11 21:41:03.314131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:28.710 qpair failed and we were unable to recover it.
00:34:28.710 [2024-07-11 21:41:03.314248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.314275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.314440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.314471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.314611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.314656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.314818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.314846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.314981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.315010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.315128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.315160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.315295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.315324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.315474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.315504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.315668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.315698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.315806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.315833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 
00:34:28.710 [2024-07-11 21:41:03.315945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.315976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.316114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.316141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.316247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.316274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.316377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.316404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.316538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.316565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.316671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.316713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.316876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.316903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.317044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.317071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.317171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.317197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.317328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.317355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 
00:34:28.710 [2024-07-11 21:41:03.317509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.317538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.317692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.317721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.317892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.317919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.318051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.318078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.318239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.318268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.318439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.318469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.318605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.318633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.318794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.318821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.318926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.318953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 00:34:28.710 [2024-07-11 21:41:03.319108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.710 [2024-07-11 21:41:03.319140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.710 qpair failed and we were unable to recover it. 
00:34:28.710 [2024-07-11 21:41:03.319320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.319348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.319535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.319564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.319680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.319709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.319844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.319872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.320005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.320032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.320130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.320156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.320307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.320351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.320490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.320534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.320645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.320670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.320823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.320851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 
00:34:28.711 [2024-07-11 21:41:03.320962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.320989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.321152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.321179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.321325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.321352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.321480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.321510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.321659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.321688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.321838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.321866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.321981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.322017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.322166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.322193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.322337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.322367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.322518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.322549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 
00:34:28.711 [2024-07-11 21:41:03.322661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.322692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.322852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.322880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.323010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.323053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.323200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.323230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.323409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.323436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.323593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.323623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.323771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.323825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.323961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.323987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.324094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.324120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.324234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.324260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 
00:34:28.711 [2024-07-11 21:41:03.324392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.324420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.324589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.324616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.324763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.324790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.324928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.324954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.325057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.325086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.325257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.325284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.325463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.325489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.325599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.325626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.325796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.325824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.325929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.325956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 
00:34:28.711 [2024-07-11 21:41:03.326096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.711 [2024-07-11 21:41:03.326123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.711 qpair failed and we were unable to recover it. 00:34:28.711 [2024-07-11 21:41:03.326229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.326272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.326431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.326458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.326583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.326613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.326730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.326763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.326893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.326920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.327021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.327048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.327182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.327209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.327315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.327341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.327467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.327493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 
00:34:28.712 [2024-07-11 21:41:03.327654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.327696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.327867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.327894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.328004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.328031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.328168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.328194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.328360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.328390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.328503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.328533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.328654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.328683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.328868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.328895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.329035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.329062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.329224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.329254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 
00:34:28.712 [2024-07-11 21:41:03.329381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.329410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.329560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.329587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.329720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.329746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.329864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.329891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.329993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.330020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.330159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.330188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.330371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.330399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.330553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.330580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.330702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.330731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.330859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.330885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 
00:34:28.712 [2024-07-11 21:41:03.331016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.331043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.331188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.331214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.331361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.331389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.331571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.331598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.331705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.331731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.331880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.331910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.332008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.332035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.332167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.332193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.332298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.332325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.332456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.332487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 
00:34:28.712 [2024-07-11 21:41:03.332699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.332728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.712 [2024-07-11 21:41:03.332882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2cf20 is same with the state(5) to be set 00:34:28.712 [2024-07-11 21:41:03.333032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.712 [2024-07-11 21:41:03.333072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.712 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.333229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.333262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.333390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.333421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.333601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.333633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.333781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.333828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.333959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.333986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.334182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.334239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.334415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.334450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 
00:34:28.713 [2024-07-11 21:41:03.334611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.334672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.334807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.334836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.334958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.334987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.335116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.335148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.335291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.335321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.335470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.335507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.335651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.335681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.335862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.335891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.336023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.336076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.336288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.336337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 
00:34:28.713 [2024-07-11 21:41:03.336463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.336513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.336687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.336718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.336887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.336915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.337039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.337078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.337233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.337264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.337403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.337433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.337550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.337581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.337728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.337765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.337916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.337944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.338054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.338082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 
00:34:28.713 [2024-07-11 21:41:03.338190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.338217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.338403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.338433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.338576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.338606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.338736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.338771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.338930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.338957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.339108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.339137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.339264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.339293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.339432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.339462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.339605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.339646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 00:34:28.713 [2024-07-11 21:41:03.339805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.713 [2024-07-11 21:41:03.339837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.713 qpair failed and we were unable to recover it. 
00:34:28.713 [2024-07-11 21:41:03.339963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:28.713 [2024-07-11 21:41:03.339991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:28.713 qpair failed and we were unable to recover it.
[... the same three-message failure sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats approximately 200 more times between 21:41:03.340 and 21:41:03.377 (playback 00:34:28.713 through 00:34:28.719), cycling through tqpair=0x7fb7a0000b90, 0x1c1ef20, 0x7fb7a8000b90, and 0x7fb798000b90 ...]
00:34:28.719 [2024-07-11 21:41:03.377160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.377188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.377337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.377367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.377505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.377536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.377711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.377741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.377874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.377900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.378030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.378056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.378191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.378219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.378366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.378395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.378545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.378577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.378733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.378767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 
00:34:28.720 [2024-07-11 21:41:03.378926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.378953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.379062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.379091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.379226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.379255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.379390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.379454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.379618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.379650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.379840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.379867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.380041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.380070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.380279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.380333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.380481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.380510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.380679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.380708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 
00:34:28.720 [2024-07-11 21:41:03.380866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.380892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.381068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.381098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.381272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.381331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.381450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.381494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.381639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.381668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.381821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.381848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.382006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.382033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.382189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.382218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.382386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.382415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.382581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.382612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 
00:34:28.720 [2024-07-11 21:41:03.382760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.382805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.382964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.382991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.383146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.383175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.383320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.383349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.383499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.383528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.383700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.383725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.383843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.383880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.384050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.384091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.384253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.384311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.384445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.384492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 
00:34:28.720 [2024-07-11 21:41:03.384635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.384664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.384796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.384823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.384994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.385021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.385152] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.385180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.385342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.385370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.385481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.385510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.385658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.720 [2024-07-11 21:41:03.385685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.720 qpair failed and we were unable to recover it. 00:34:28.720 [2024-07-11 21:41:03.385839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.385880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.386004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.386034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.386151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.386179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 
00:34:28.721 [2024-07-11 21:41:03.386437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.386490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.386635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.386665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.386823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.386851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.387035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.387070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.387242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.387272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.387517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.387567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.387736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.387777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.387958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.387986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.388122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.388150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.388325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.388356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 
00:34:28.721 [2024-07-11 21:41:03.388478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.388521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.388670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.388700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.388834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.388863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.389027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.389055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.389166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.389193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.389404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.389458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.389605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.389635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.389826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.389854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.389990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.390018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.390168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.390212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 
00:34:28.721 [2024-07-11 21:41:03.390320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.390349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.390500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.390547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.390656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.390684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.390844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.390889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.391045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.391089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.391271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.391315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.391510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.391567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.391740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.391793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.391944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.391989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.392140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.392184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 
00:34:28.721 [2024-07-11 21:41:03.392323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.392350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.392477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.392504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.392643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.392672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.392854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.392903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.393063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.393108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.393239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.393267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.393427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.393455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.393560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.393587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.393714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.393741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.393932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.393979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 
00:34:28.721 [2024-07-11 21:41:03.394138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.394181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.394331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.394361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.394491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.394519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.394676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.394708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.721 [2024-07-11 21:41:03.394882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.721 [2024-07-11 21:41:03.394926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.721 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.395113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.395145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.395261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.395290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.395406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.395445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.395667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.395721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.395917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.395948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 
00:34:28.722 [2024-07-11 21:41:03.396093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.396133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.396289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.396328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.396480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.396519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.396672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.396700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.396829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.396870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.397042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.397070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.397180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.397209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.397382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.397412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.397585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.397615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.397781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.397809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 
00:34:28.722 [2024-07-11 21:41:03.397969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.397996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.398144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.398174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.398347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.398376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.398538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.398589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.398731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.398774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.398924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.398950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.399112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.399141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.399314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.399343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.399458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.399487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.399648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.399677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 
00:34:28.722 [2024-07-11 21:41:03.399793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.399821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.399996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.400041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.400181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.400244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.400354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.400382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.400519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.400547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.400713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.400740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.400901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.400946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.401124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.401154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.401359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.401389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.401528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.401555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 
00:34:28.722 [2024-07-11 21:41:03.401679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.401706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.401843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.401874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.401995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.402024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.402197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.402232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.402487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.402538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.402665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.402691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.402912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.402941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.403049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.403080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.403191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.403221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.403360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.403390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 
00:34:28.722 [2024-07-11 21:41:03.403539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.403566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.403698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.722 [2024-07-11 21:41:03.403724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.722 qpair failed and we were unable to recover it. 00:34:28.722 [2024-07-11 21:41:03.403861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.403889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.404047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.404076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.404218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.404247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.404421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.404451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.404591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.404621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.404743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.404796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.404930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.404958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.405143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.405191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 
00:34:28.723 [2024-07-11 21:41:03.405344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.405375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.405540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.405571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.405722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.405749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.405908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.405953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.406080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.406133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.406286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.406330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.406485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.406513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.406680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.406707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.406878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.406910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.407024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.407054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 
00:34:28.723 [2024-07-11 21:41:03.407196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.407225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.407433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.407485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.407657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.407687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.407890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.407931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.408119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.408164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.408317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.408361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.408518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.408563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.408697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.408725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.408898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.408942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.409071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.409115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 
00:34:28.723 [2024-07-11 21:41:03.409332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.409385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.409494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.409521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.409678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.409705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.409876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.409922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.410070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.410100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.410248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.410279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.410513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.410565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.410719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.410749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.410917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.410943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.411095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.411126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 
00:34:28.723 [2024-07-11 21:41:03.411304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.411334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.411504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.411534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.723 [2024-07-11 21:41:03.411692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.723 [2024-07-11 21:41:03.411721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.723 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.411890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.411919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.412053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.412082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.412267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.412315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.412460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.412508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.412617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.412645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.412750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.412790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.412971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.413014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 
00:34:28.724 [2024-07-11 21:41:03.413200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.413245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.413370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.413415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.413525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.413552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.413711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.413737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.413902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.413930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.414057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.414084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.414252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.414279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.414412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.414439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.414575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.414602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.414735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.414777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 
00:34:28.724 [2024-07-11 21:41:03.414933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.414962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.415095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.415122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.415282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.415310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.415440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.415469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.415628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.415655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.415821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.415849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.416030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.416060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.416175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.416207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.416357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.416388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.416589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.416635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 
00:34:28.724 [2024-07-11 21:41:03.416776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.416803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.416959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.417005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.417183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.417227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.417353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.417401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.417540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.417569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.417732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.417766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.417950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.417995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.418174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.418218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.418406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.418436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.418589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.418616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 
00:34:28.724 [2024-07-11 21:41:03.418718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.418746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.418882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.418926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.419081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.419125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.419277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.419322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.419457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.419484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.419619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.419645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.419775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.419803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.419957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.420002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.420185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.420228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.420355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.420382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 
00:34:28.724 [2024-07-11 21:41:03.420484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.420513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.420672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.420699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.724 qpair failed and we were unable to recover it. 00:34:28.724 [2024-07-11 21:41:03.420845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.724 [2024-07-11 21:41:03.420875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.421046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.421090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.421249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.421294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.421426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.421454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.421586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.421614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.421745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.421777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.421935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.421980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.422168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.422214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 
00:34:28.725 [2024-07-11 21:41:03.422381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.422408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.422547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.422574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.422709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.422736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.422924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.422973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.423127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.423171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.423350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.423395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.423523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.423551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.423662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.423689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.423839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.423887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.424017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.424061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 
00:34:28.725 [2024-07-11 21:41:03.424215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.424264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.424420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.424448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.424576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.424603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.424735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.424773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.424937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.424964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.425126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.425153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.425315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.425343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.425481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.425509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.425677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.425705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.425871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.425900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 
00:34:28.725 [2024-07-11 21:41:03.426059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.426089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.426279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.426321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.426453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.426480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.426610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.426638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.426815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.426845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.427017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.427061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.427179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.427208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.427384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.427411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.427521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.427549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.427688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.427715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 
00:34:28.725 [2024-07-11 21:41:03.427870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.427919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.428088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.428116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.428272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.428298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.428427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.428453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.428610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.428637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.428745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.428781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.428937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.428964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.429121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.429167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.429316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.429359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.429496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.429523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 
00:34:28.725 [2024-07-11 21:41:03.429674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.429715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.429867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.429897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.725 [2024-07-11 21:41:03.429999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.725 [2024-07-11 21:41:03.430043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.725 qpair failed and we were unable to recover it. 00:34:28.726 [2024-07-11 21:41:03.430214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:28.726 [2024-07-11 21:41:03.430244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:28.726 qpair failed and we were unable to recover it. 00:34:28.726 [2024-07-11 21:41:03.430391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.012 [2024-07-11 21:41:03.430419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.012 qpair failed and we were unable to recover it. 00:34:29.012 [2024-07-11 21:41:03.430576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.012 [2024-07-11 21:41:03.430601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.012 qpair failed and we were unable to recover it. 00:34:29.012 [2024-07-11 21:41:03.430736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.012 [2024-07-11 21:41:03.430771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.012 qpair failed and we were unable to recover it. 00:34:29.012 [2024-07-11 21:41:03.430933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.430979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.431137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.431166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.431387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.431436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 
00:34:29.013 [2024-07-11 21:41:03.431593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.431619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.431748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.431781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.431894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.431920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.432074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.432105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.432241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.432267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.432402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.432428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.432535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.432561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.432708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.432734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.432884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.432911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.433012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.433038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 
00:34:29.013 [2024-07-11 21:41:03.433174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.433200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.433333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.433362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.433524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.433552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.433655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.433683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.433820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.433846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.433961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.433986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.434086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.434112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.434267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.434300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.434447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.434475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 00:34:29.013 [2024-07-11 21:41:03.434589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.013 [2024-07-11 21:41:03.434616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.013 qpair failed and we were unable to recover it. 
00:34:29.018 [2024-07-11 21:41:03.469856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.018 [2024-07-11 21:41:03.469884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.018 qpair failed and we were unable to recover it. 00:34:29.018 [2024-07-11 21:41:03.470040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.018 [2024-07-11 21:41:03.470067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.018 qpair failed and we were unable to recover it. 00:34:29.018 [2024-07-11 21:41:03.470256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.018 [2024-07-11 21:41:03.470282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.018 qpair failed and we were unable to recover it. 00:34:29.018 [2024-07-11 21:41:03.470420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.018 [2024-07-11 21:41:03.470446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.018 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.470580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.470607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.470779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.470806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.470918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.470946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.471142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.471186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.471347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.471376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.471512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.471540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 
00:34:29.019 [2024-07-11 21:41:03.471671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.471698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.471823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.471851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.472030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.472060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.472210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.472239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.472389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.472416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.472571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.472598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.472764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.472809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.472946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.472972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.473147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.473177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.473321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.473352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 
00:34:29.019 [2024-07-11 21:41:03.473509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.473536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.473675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.473720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.473857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.473884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.474042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.474069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.474180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.474207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.474365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.474394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.474554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.474581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.474718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.474745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.474864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.474891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.475025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.475054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 
00:34:29.019 [2024-07-11 21:41:03.475183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.475211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.475395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.475425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.475579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.475606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.475714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.475742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.475929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.475969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.476136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.476165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.476316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.476348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.476577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.476624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.476740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.476774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.476930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.476958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 
00:34:29.019 [2024-07-11 21:41:03.477226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.477275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.477451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.477478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.477631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.477661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.477802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.477849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.019 qpair failed and we were unable to recover it. 00:34:29.019 [2024-07-11 21:41:03.477953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.019 [2024-07-11 21:41:03.477980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.478115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.478142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.478311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.478338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.478472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.478503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.478657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.478687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.478849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.478879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 
00:34:29.020 [2024-07-11 21:41:03.479051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.479078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.479202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.479232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.479377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.479407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.479590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.479641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.479785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.479830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.479964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.479991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.480161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.480187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.480285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.480310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.480455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.480484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.480641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.480668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 
00:34:29.020 [2024-07-11 21:41:03.480865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.480893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.481028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.481071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.481251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.481278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.481432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.481461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.481622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.481653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.481840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.481867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.482001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.482045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.482229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.482256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.482392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.482420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.482528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.482571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 
00:34:29.020 [2024-07-11 21:41:03.482693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.482724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.482864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.482891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.483064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.483093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.483330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.483382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.483555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.483589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.483729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.483770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.483924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.483952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.484086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.484113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.484215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.484241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.484369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.484396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 
00:34:29.020 [2024-07-11 21:41:03.484502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.484530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.484662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.484704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.484889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.484918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.485057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.485085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.485212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.485257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.020 qpair failed and we were unable to recover it. 00:34:29.020 [2024-07-11 21:41:03.485407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.020 [2024-07-11 21:41:03.485434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.485571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.485598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.485732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.485784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.485943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.485970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.486103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.486130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 
00:34:29.021 [2024-07-11 21:41:03.486240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.486267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.486391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.486419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.486629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.486657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.486780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.486823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.486959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.486986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.487111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.487138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.487273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.487317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.487545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.487598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.487783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.487811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.487965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.487995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 
00:34:29.021 [2024-07-11 21:41:03.488144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.488175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.488360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.488387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.488536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.488566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.488726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.488761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.488895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.488922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.489060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.489105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.489290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.489322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.489456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.489483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.489615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.489642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.489804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.489848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 
00:34:29.021 [2024-07-11 21:41:03.489956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.489983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.490113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.490139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.490295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.490325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.490507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.490534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.490689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.490716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.490862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.490892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.491068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.491109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.491273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.491302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.491453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.491483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.491735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.491790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 
00:34:29.021 [2024-07-11 21:41:03.491926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.491953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.492086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.492114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.021 qpair failed and we were unable to recover it. 00:34:29.021 [2024-07-11 21:41:03.492308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.021 [2024-07-11 21:41:03.492353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.492460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.492487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.492657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.492698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.492835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.492864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.492962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.492989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.493112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.493142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.493371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.493420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.493596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.493627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 
00:34:29.022 [2024-07-11 21:41:03.493814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.493841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.493963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.493989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.494147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.494177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.494324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.494354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.494494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.494524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.494684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.494711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.494875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.494903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.495081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.495111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.495260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.495290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.495500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.495530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 
00:34:29.022 [2024-07-11 21:41:03.495683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.495714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.495848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.495876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.495997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.496023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.496185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.496238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.496398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.496425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.496561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.496590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.496759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.496790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.496915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.496942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.497062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.497089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.497210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.497240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 
00:34:29.022 [2024-07-11 21:41:03.497386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.497415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.497581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.497637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.497802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.497832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.497966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.497993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.498129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.498156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.498280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.498321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.498502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.498535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.498683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.498714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.498869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.498896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.499025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.499069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 
00:34:29.022 [2024-07-11 21:41:03.499182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.022 [2024-07-11 21:41:03.499215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.022 qpair failed and we were unable to recover it. 00:34:29.022 [2024-07-11 21:41:03.499473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.499528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.499695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.499725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.499865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.499893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.500029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.500059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.500211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.500241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.500412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.500442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.500559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.500590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.500730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.500767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.500922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.500949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 
00:34:29.023 [2024-07-11 21:41:03.501104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.501130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.501231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.501273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.501423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.501467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.501696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.501726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.501888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.501915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.502047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.502076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.502202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.502244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.502366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.502396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.502529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.502573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.502708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.502735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 
00:34:29.023 [2024-07-11 21:41:03.502877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.502904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.503014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.503060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.503191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.503236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.503396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.503438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.503555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.503585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.503718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.503744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.503883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.503910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.504010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.504036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.504190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.504219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.504358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.504388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 
00:34:29.023 [2024-07-11 21:41:03.504539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.504569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.504696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.504723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.504840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.504867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.504999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.505025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.505204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.505249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.505376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.505422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.505538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.505568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.505743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.505776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.505940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.505967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.506101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.506155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 
00:34:29.023 [2024-07-11 21:41:03.506309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.506341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.506478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.023 [2024-07-11 21:41:03.506522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.023 qpair failed and we were unable to recover it. 00:34:29.023 [2024-07-11 21:41:03.506666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.506698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.506834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.506862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.507019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.507046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.507200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.507230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.507367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.507397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.507570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.507601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.507756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.507788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.507953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.507994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 
00:34:29.024 [2024-07-11 21:41:03.508117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.508164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.508320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.508367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.508523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.508568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.508698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.508726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.508864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.508909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.509042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.509070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.509185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.509229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.509386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.509434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.509589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.509616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.509746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.509781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 
00:34:29.024 [2024-07-11 21:41:03.509910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.509941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.510105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.510135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.510278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.510327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.510461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.510489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.510632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.510659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.510822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.510868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.510998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.511049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.511205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.511251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.511390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.511418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.511527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.511555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 
00:34:29.024 [2024-07-11 21:41:03.511715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.511743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.511871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.511901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.512059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.512102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.512228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.512258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.512432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.512460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.512613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.512640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.512751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.512788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.512896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.512924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.513082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.513109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.513218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.513246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 
00:34:29.024 [2024-07-11 21:41:03.513376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.513404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.513534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.513561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.024 [2024-07-11 21:41:03.513692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.024 [2024-07-11 21:41:03.513719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.024 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.513887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.513915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.514065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.514095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.514248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.514278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.514407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.514435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.514541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.514569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.514705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.514733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.514882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.514927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 
00:34:29.025 [2024-07-11 21:41:03.515085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.515128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.515244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.515290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.515445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.515472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.515627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.515655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.515784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.515813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.515946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.515974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.516098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.516126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.516236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.516264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.516391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.516418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.516575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.516603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 
00:34:29.025 [2024-07-11 21:41:03.516726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.516763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.516931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.516976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.517119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.517162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.517285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.517312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.517424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.517451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.517588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.517617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.517781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.517809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.517941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.517968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.518126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.518153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.518286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.518313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 
00:34:29.025 [2024-07-11 21:41:03.518440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.518468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.518586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.518617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.518760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.518798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.518972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.519003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.519130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.519160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.519302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.519332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.519457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.519487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.519658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.519685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.519795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.519821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.519985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.520012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 
00:34:29.025 [2024-07-11 21:41:03.520153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.520198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.520368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.520398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.520537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.520567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.025 qpair failed and we were unable to recover it. 00:34:29.025 [2024-07-11 21:41:03.520722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.025 [2024-07-11 21:41:03.520749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.520863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.520896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.521003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.521031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.521159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.521190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.521428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.521472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.521624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.521656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.521812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.521845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 
00:34:29.026 [2024-07-11 21:41:03.521977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.522003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.522145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.522175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.522341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.522371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.522610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.522659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.522798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.522826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.522958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.522985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.523099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.523126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.523275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.523304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.523501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.523552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.523669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.523699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 
00:34:29.026 [2024-07-11 21:41:03.523829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.523857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.523993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.524019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.524130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.524172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.524321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.524350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.524569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.524598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.524715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.524744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.524884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.524912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.525044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.525071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.525220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.525249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 00:34:29.026 [2024-07-11 21:41:03.525368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.026 [2024-07-11 21:41:03.525396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.026 qpair failed and we were unable to recover it. 
00:34:29.026 [2024-07-11 21:41:03.525587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.026 [2024-07-11 21:41:03.525646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.026 qpair failed and we were unable to recover it.
[... roughly 200 further near-identical retries elided: every connect() to 10.0.0.2:4420 logged between 21:41:03.525 and 21:41:03.563 fails with errno = 111, the failures cycle across tqpairs 0x7fb7a0000b90, 0x1c1ef20, and 0x7fb7a8000b90, and each attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:29.032 [2024-07-11 21:41:03.563134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.563185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.563336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.563381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.563538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.563565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.563699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.563726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.563899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.563930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.564067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.564111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.564330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.564383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.564514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.564542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.564670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.564697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.564829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.564874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 
00:34:29.032 [2024-07-11 21:41:03.565014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.565042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.565204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.565249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.565387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.565414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.565582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.565609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.565723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.565750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.565935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.565979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.566135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.566178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.566284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.566312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.566428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.566457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.566606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.566635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 
00:34:29.032 [2024-07-11 21:41:03.566759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.566803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.566996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.567024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.567122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.567148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.567305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.567333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.567480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.567517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.567676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.567707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.567914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.567944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.568091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.568122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.568304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.568334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.568485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.568525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 
00:34:29.032 [2024-07-11 21:41:03.568704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.568733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.568864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.568904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.569025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.569072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.569215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.569245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.569403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.569433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.569574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.569604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.569764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.569791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.569927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.569953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.570151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.032 [2024-07-11 21:41:03.570181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.032 qpair failed and we were unable to recover it. 00:34:29.032 [2024-07-11 21:41:03.570309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.570339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 
00:34:29.033 [2024-07-11 21:41:03.570484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.570513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.570654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.570685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.570831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.570858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.571006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.571036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.571157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.571187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.571304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.571333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.571475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.571505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.571680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.571710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.571873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.571900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.572022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.572052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 
00:34:29.033 [2024-07-11 21:41:03.572160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.572191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.572305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.572339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.572539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.572569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.572710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.572740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.572869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.572896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.573021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.573047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.573145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.573187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.573291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.573319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.573443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.573473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.573622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.573651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 
00:34:29.033 [2024-07-11 21:41:03.573812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.573839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.573959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.573999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.574157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.574203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.574388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.574432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.574562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.574608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.574747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.574780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.574938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.574984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.575125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.575153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.575309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.575340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.575455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.575484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 
00:34:29.033 [2024-07-11 21:41:03.575611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.575638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.575769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.575796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.575919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.575948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.576125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.576155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.576272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.576301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.576451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.576480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.576605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.576634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.576749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.576783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.576917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.576948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 00:34:29.033 [2024-07-11 21:41:03.577056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.033 [2024-07-11 21:41:03.577085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.033 qpair failed and we were unable to recover it. 
00:34:29.033 [2024-07-11 21:41:03.577221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.577248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.577368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.577415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.577546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.577574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.577685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.577713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.577845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.577887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.578056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.578116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.578232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.578262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.578420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.578449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.578595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.578622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.578722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.578747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 
00:34:29.034 [2024-07-11 21:41:03.578891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.578918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.579045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.579074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.579196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.579226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.579373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.579403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.579549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.579578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.579719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.579749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.579914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.579942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.580109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.580154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.580311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.580356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.580491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.580535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 
00:34:29.034 [2024-07-11 21:41:03.580666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.580694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.580826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.580853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.581014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.581061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.581216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.581261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.581415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.581460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.581671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.581703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.581859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.581890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.582035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.582065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.582202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.582232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.582394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.582421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 
00:34:29.034 [2024-07-11 21:41:03.582547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.582577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.582726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.582767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.582982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.583013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.583143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.583192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.583329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.583360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.034 [2024-07-11 21:41:03.583558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.034 [2024-07-11 21:41:03.583589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.034 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.583767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.583812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.583946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.583973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.584158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.584206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.584431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.584475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 
00:34:29.035 [2024-07-11 21:41:03.584608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.584636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.584768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.584796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.584930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.584975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.585155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.585185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.585321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.585352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.585500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.585530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.585672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.585702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.585824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.585851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.586010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.586054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 00:34:29.035 [2024-07-11 21:41:03.586198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.586228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 
00:34:29.035 [2024-07-11 21:41:03.586405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.035 [2024-07-11 21:41:03.586467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.035 qpair failed and we were unable to recover it. 
00:34:29.040 [... the same three-line error pattern repeats without interruption from 21:41:03.586 through 21:41:03.622: every connect() attempt to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED), nvme_tcp_qpair_connect_sock reports a sock connection error on the affected qpair (tqpair=0x1c1ef20, 0x7fb7a0000b90, or 0x7fb798000b90), and each qpair fails and cannot be recovered ...]
00:34:29.040 [2024-07-11 21:41:03.622874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.622901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.623059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.623086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.623213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.623243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.623390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.623421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.623605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.623636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.623759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.623806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.623967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.623994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.624242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.624273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.624423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.624453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.624589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.624633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 
00:34:29.040 [2024-07-11 21:41:03.624779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.624822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.624970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.624997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.625154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.625184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.625350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.625380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.625519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.040 [2024-07-11 21:41:03.625549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.040 qpair failed and we were unable to recover it. 00:34:29.040 [2024-07-11 21:41:03.625679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.625706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.625846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.625874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.626005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.626048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.626193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.626223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.626427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.626457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 
00:34:29.041 [2024-07-11 21:41:03.626572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.626602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.626748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.626804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.626906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.626935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.627070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.627112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.627231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.627261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.627425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.627455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.627605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.627635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.627822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.627850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.627982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.628009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.628160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.628190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 
00:34:29.041 [2024-07-11 21:41:03.628329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.628359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.628556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.628586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.628733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.628772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.628922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.628950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.629085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.629112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.629252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.629279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.629429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.629461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.629628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.629658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.629822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.629849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.629957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.629984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 
00:34:29.041 [2024-07-11 21:41:03.630113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.630140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.630313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.630343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.630509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.630539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.630690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.630718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.630857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.630885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.630996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.631023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.631173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.631216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.631356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.631386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.631530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.631566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.631722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.631750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 
00:34:29.041 [2024-07-11 21:41:03.631893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.631920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.632022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.632049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.632227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.632257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.632393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.632424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.632592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.632622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.041 [2024-07-11 21:41:03.632763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.041 [2024-07-11 21:41:03.632791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.041 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.632922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.632949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.633121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.633167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.633336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.633392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.633568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.633599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 
00:34:29.042 [2024-07-11 21:41:03.633785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.633814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.633969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.633996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.634117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.634147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.634291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.634323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.634470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.634501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.634682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.634713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.634865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.634905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.635054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.635112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.635270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.635315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.635461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.635506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 
00:34:29.042 [2024-07-11 21:41:03.635643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.635670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.635806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.635834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.635958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.636004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.636160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.636205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.636322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.636352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.636505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.636538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.636684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.636724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.636868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.636897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.637069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.637099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.637303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.637354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 
00:34:29.042 [2024-07-11 21:41:03.637529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.637558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.637744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.637780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.637916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.637943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.638100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.638128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.638263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.638290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.638501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.638528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.638701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.638728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.638841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.638869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.639016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.639046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.639223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.639254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 
00:34:29.042 [2024-07-11 21:41:03.639374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.639404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.639550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.639581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.639730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.639764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.639906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.639933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.640060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.640088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.640192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.640221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.042 [2024-07-11 21:41:03.640380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.042 [2024-07-11 21:41:03.640407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.042 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.640552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.640582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.640701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.640732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.640919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.640948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 
00:34:29.043 [2024-07-11 21:41:03.641115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.641156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.641320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.641366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.641529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.641580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.641733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.641767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.641926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.641953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.642077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.642121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.642282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.642309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.642441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.642468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.642601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.642628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.642787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.642815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 
00:34:29.043 [2024-07-11 21:41:03.642927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.642954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.643114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.643141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.643244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.643271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.643399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.643427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.643553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.643580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.643718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.643750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.643907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.643934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.644032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.644059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.644219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.644247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.644381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.644408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 
00:34:29.043 [2024-07-11 21:41:03.644540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.644568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.644704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.644732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.644857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.644888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.645056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.645100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.645248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.645293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.645431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.645459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.645591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.645618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.645760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.645788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.645938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.645984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.646174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.646220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 
00:34:29.043 [2024-07-11 21:41:03.646355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.646382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.043 [2024-07-11 21:41:03.646538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.043 [2024-07-11 21:41:03.646565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.043 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.646675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.646702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.646832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.646861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.646961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.646988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.647101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.647129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.647233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.647261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.647393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.647419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.647578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.647605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.647751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.647783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 
00:34:29.044 [2024-07-11 21:41:03.647920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.647946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.648078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.648105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.648271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.648299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.648425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.648452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.648558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.648585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.648714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.648741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.648858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.648886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.649043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.649071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.649232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.649259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.649355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.649382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 
00:34:29.044 [2024-07-11 21:41:03.649518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.649546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.649683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.649710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.649855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.649883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.650041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.650068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.650225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.650252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.650362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.650393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.650498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.650525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.650692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.650719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.650861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.650891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.651121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.651165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 
00:34:29.044 [2024-07-11 21:41:03.651353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.651397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.651556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.651584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.651743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.651777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.651933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.651978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.652099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.652130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.652328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.652372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.652535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.652562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.652692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.652719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.652928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.652973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.653129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.653161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 
00:34:29.044 [2024-07-11 21:41:03.653304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.653334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.653486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.653517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.044 [2024-07-11 21:41:03.653662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.044 [2024-07-11 21:41:03.653692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.044 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.653818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.653846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.654023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.654053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.654210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.654240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.654359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.654389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.654562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.654608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.654766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.654793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.654936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.654981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 
00:34:29.045 [2024-07-11 21:41:03.655163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.655207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.655327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.655358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.655543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.655583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.655713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.655741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.655881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.655909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.656055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.656084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.656320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.656370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.656514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.656545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.656686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.656713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.656851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.656878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 
00:34:29.045 [2024-07-11 21:41:03.657056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.657086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.657209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.657252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.657425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.657454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.657611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.657638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.657795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.657822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.657980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.658007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.658151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.658183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.658361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.658391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.658535] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.658565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.658715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.658745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 
00:34:29.045 [2024-07-11 21:41:03.658904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.658931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.659061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.659088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.659248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.659279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.659412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.659456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.659627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.659657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.659777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.659821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.659976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.660003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.660149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.660179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.660334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.660364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.660504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.660539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 
00:34:29.045 [2024-07-11 21:41:03.660711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.660741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.660900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.660927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.661084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.661111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.045 [2024-07-11 21:41:03.661401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.045 [2024-07-11 21:41:03.661454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.045 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.661596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.661627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.661782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.661810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.661946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.661973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.662135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.662180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.662472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.662523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.662695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.662725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 
00:34:29.046 [2024-07-11 21:41:03.662851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.662879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.663026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.663056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.663251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.663313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.663455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.663486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.663632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.663673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.663865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.663906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.664056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.664087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.664229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.664259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.664531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.664581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.664750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.664802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 
00:34:29.046 [2024-07-11 21:41:03.664961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.664988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.665250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.665300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.665438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.665468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.665617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.665646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.665825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.665866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.666030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.666059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.666170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.666204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.666424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.666469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.666567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.666595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.666739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.666786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 
00:34:29.046 [2024-07-11 21:41:03.666970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.667001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.667138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.667168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.667290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.667320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.667538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.667591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.667741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.667779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.667958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.668006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.668240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.668291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.668419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.668463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.668576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.668605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.668767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.668796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 
00:34:29.046 [2024-07-11 21:41:03.668961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.668988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.669165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.669210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.669437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.669480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.669604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.669631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.046 qpair failed and we were unable to recover it. 00:34:29.046 [2024-07-11 21:41:03.669792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.046 [2024-07-11 21:41:03.669819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.669973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.670019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.670177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.670221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.670325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.670354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.670465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.670493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.670627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.670654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 
00:34:29.047 [2024-07-11 21:41:03.670815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.670860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.671011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.671042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.671155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.671186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.671306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.671352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.671502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.671530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.671665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.671693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.671819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.671847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.671972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.671999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.672125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.672152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.672287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.672315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 
00:34:29.047 [2024-07-11 21:41:03.672465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.672506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.672622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.672650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.672804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.672835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.672982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.673013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.673187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.673217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.673439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.673469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.673645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.673676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.673831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.673859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.673993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.674036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.674176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.674207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 
00:34:29.047 [2024-07-11 21:41:03.674345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.674375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.674508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.674551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.674727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.674760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.674874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.674901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.675063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.675090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.675239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.675267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.675420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.675450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.675596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.675625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.675771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.675814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 00:34:29.047 [2024-07-11 21:41:03.675941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.047 [2024-07-11 21:41:03.675967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.047 qpair failed and we were unable to recover it. 
00:34:29.047 [2024-07-11 21:41:03.676178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.676207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.676440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.676469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.676605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.676635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.676787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.676814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.676950] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.676977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.677080] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.677124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.677234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.677263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.677472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.677501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.677646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.677676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 00:34:29.048 [2024-07-11 21:41:03.677857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.048 [2024-07-11 21:41:03.677898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.048 qpair failed and we were unable to recover it. 
00:34:29.048 [2024-07-11 21:41:03.678016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.048 [2024-07-11 21:41:03.678044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.048 qpair failed and we were unable to recover it.
00:34:29.048 [... the same connect() failed / sock connection error / qpair failed triple repeats for every retry between 21:41:03.678 and 21:41:03.715, alternating across tqpair=0x1c1ef20, 0x7fb7a0000b90, and 0x7fb7a8000b90, always with addr=10.0.0.2, port=4420, errno = 111 ...]
00:34:29.053 [2024-07-11 21:41:03.715288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.053 [2024-07-11 21:41:03.715315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.053 qpair failed and we were unable to recover it.
00:34:29.053 [2024-07-11 21:41:03.715464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.715493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.715664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.715694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.715874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.715900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.716039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.716065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.716171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.716198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.716353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.716379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.716510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.716539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.716674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.716701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.716882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.716909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.717048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.717075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 
00:34:29.053 [2024-07-11 21:41:03.717221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.717250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.717393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.717419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.717553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.717581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.717705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.717732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.717908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.717948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.718106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.718138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.718289] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.718319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.718465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.718494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.718659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.718689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.718845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.718872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 
00:34:29.053 [2024-07-11 21:41:03.718974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.719003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.719102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.719129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.719266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.719309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.719469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.719495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.719675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.053 [2024-07-11 21:41:03.719704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.053 qpair failed and we were unable to recover it. 00:34:29.053 [2024-07-11 21:41:03.719887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.719915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.720122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.720183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.720418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.720471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.720639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.720669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.720783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.720830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 
00:34:29.054 [2024-07-11 21:41:03.720955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.720982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.721168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.721197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.721338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.721402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.721571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.721601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.721749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.721800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.721910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.721937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.722101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.722128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.722254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.722283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.722455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.722484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.722662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.722692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 
00:34:29.054 [2024-07-11 21:41:03.722841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.722868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.723021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.723048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.723197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.723226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.723366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.723396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.723569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.723599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.723765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.723806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.723972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.724001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.724126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.724174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.724343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.724371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.724527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.724577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 
00:34:29.054 [2024-07-11 21:41:03.724708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.724735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.724878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.724905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.725039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.725066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.725195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.725221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.725381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.725408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.725511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.725537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.725661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.725688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.725792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.725819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.725933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.725959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.726126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.726155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 
00:34:29.054 [2024-07-11 21:41:03.726378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.726407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.726575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.726604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.726828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.726864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.727007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.727035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.727215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.727245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.727503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.727554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.727735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.727769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.727901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.727928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.728109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.728139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.054 qpair failed and we were unable to recover it. 00:34:29.054 [2024-07-11 21:41:03.728403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.054 [2024-07-11 21:41:03.728457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 
00:34:29.055 [2024-07-11 21:41:03.728598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.728628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.728768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.728795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.728947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.728974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.729122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.729152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.729358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.729411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.729618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.729647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.729810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.729841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.729975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.730002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.730208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.730261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.730382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.730412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 
00:34:29.055 [2024-07-11 21:41:03.730579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.730609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.730760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.730788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.730925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.730952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.731139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.731165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.731452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.731503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.731657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.731687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.731842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.731869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.732025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.732052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.732173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.732202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.732351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.732381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 
00:34:29.055 [2024-07-11 21:41:03.732532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.732562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.732716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.732742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.732879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.732906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.733045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.733072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.733228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.733257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.733400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.733429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.733559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.733603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.733745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.733798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.733909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.733935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.734062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.734091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 
00:34:29.055 [2024-07-11 21:41:03.734266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.734295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.734429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.734459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.734623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.734664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.734784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.734820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.734955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.734982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.735137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.735182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.735309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.735353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.735482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.735531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.735630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.735657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.735789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.735817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 
00:34:29.055 [2024-07-11 21:41:03.735963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.736003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.055 qpair failed and we were unable to recover it. 00:34:29.055 [2024-07-11 21:41:03.736167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.055 [2024-07-11 21:41:03.736194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.736304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.736332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.736464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.736491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.736655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.736682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.736817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.736845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.736969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.737015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.737170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.737213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.737395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.737438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.737543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.737571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 
00:34:29.056 [2024-07-11 21:41:03.737702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.737729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.737896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.737945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.738091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.738121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.738384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.738435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.738564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.738591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.738724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.738751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.738891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.738918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.739047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.739074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.739174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.739201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 00:34:29.056 [2024-07-11 21:41:03.739341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.056 [2024-07-11 21:41:03.739372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.056 qpair failed and we were unable to recover it. 
00:34:29.056 [2024-07-11 21:41:03.739487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.056 [2024-07-11 21:41:03.739519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.056 qpair failed and we were unable to recover it.
00:34:29.056 [2024-07-11 21:41:03.740775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.056 [2024-07-11 21:41:03.740805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.056 qpair failed and we were unable to recover it.
00:34:29.057 [2024-07-11 21:41:03.743694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.057 [2024-07-11 21:41:03.743738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.057 qpair failed and we were unable to recover it.
[the same three-line failure pattern repeats continuously through 2024-07-11 21:41:03.774128, cycling over tqpair values 0x7fb7a8000b90, 0x7fb7a0000b90, and 0x1c1ef20, always with addr=10.0.0.2, port=4420, errno = 111, and ending each time with "qpair failed and we were unable to recover it."]
00:34:29.342 [2024-07-11 21:41:03.774268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.774298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.774439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.774469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.774604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.774632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.774801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.774829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.774984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.775010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.775172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.775230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.775513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.775566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.775685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.775715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.775906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.775933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.776071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.776101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 
00:34:29.342 [2024-07-11 21:41:03.776247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.776277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.776505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.776565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.776692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.342 [2024-07-11 21:41:03.776719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.342 qpair failed and we were unable to recover it. 00:34:29.342 [2024-07-11 21:41:03.776831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.776859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.776973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.777000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.777221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.777251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.777364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.777394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.777537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.777567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.777724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.777797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.777922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.777951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 
00:34:29.343 [2024-07-11 21:41:03.778085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.778113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.778254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.778299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.778450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.778495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.778647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.778687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.778826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.778854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.778966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.778993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.779137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.779167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.779297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.779325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.779478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.779508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.779675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.779706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 
00:34:29.343 [2024-07-11 21:41:03.779847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.779875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.780019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.780064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.780229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.780259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.780398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.780428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.780567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.780596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.780783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.780824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.780940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.780968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.781129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.781155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.781283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.781313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.781469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.781497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 
00:34:29.343 [2024-07-11 21:41:03.781636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.781663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.781798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.781827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.781969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.781995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.782127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.782154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.782281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.782311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.782428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.782462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.782608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.782639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.782770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.782797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.782907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.343 [2024-07-11 21:41:03.782934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.343 qpair failed and we were unable to recover it. 00:34:29.343 [2024-07-11 21:41:03.783046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.783072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.344 [2024-07-11 21:41:03.783196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.783226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.783342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.783369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.783551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.783581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.783699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.783726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.783873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.783900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.784005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.784050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.784251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.784281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.784398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.784428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.784576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.784605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.784744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.784779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.344 [2024-07-11 21:41:03.784906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.784933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.785098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.785125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.785257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.785286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.785470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.785500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.785616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.785645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.785792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.785820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.785984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.786011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.786167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.786193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.786355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.786384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.786500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.786530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.344 [2024-07-11 21:41:03.786647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.786676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.786861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.786889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.786997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.787046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.787185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.787215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.787360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.787390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.787523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.787567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.787698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.787729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.787855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.787882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.787993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.788019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.788172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.788201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.344 [2024-07-11 21:41:03.788360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.788389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.788541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.788572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.788717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.788746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.788912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.788939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.789044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.789071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.789173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.789200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.789345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.789375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.789534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.789561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.789699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.789729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.789885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.789925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 
00:34:29.344 [2024-07-11 21:41:03.790063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.790091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.790213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.790258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.790403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.790433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.344 [2024-07-11 21:41:03.790580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.344 [2024-07-11 21:41:03.790610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.344 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.790760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.790806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.790942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.790969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.791132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.791186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.791330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.791360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.791478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.791511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.791669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.791701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 
00:34:29.345 [2024-07-11 21:41:03.791817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.791844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.791965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.792009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.792191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.792220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.792395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.792424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.792548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.792576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.792703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.792731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.792890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.792930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.793069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.793100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.793325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.793382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.793590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.793617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 
00:34:29.345 [2024-07-11 21:41:03.793747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.793783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.793915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.793942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.794100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.794127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.794339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.794394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.794568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.794595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.794725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.794763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.794897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.794924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.795028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.795054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.795209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.795235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.795379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.795440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 
00:34:29.345 [2024-07-11 21:41:03.795589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.795620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.795793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.795820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.795931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.795960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.796076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.796105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.796249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.796278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.796436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.796484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.796624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.796652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.796796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.796824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.796958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.796985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 00:34:29.345 [2024-07-11 21:41:03.797119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.345 [2024-07-11 21:41:03.797146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.345 qpair failed and we were unable to recover it. 
00:34:29.345 [2024-07-11 21:41:03.797298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.345 [2024-07-11 21:41:03.797359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.345 qpair failed and we were unable to recover it.
[... the same three-line record repeats continuously from 21:41:03.797 through 21:41:03.833, cycling over tqpair handles 0x7fb7a0000b90, 0x7fb798000b90, 0x7fb7a8000b90, and 0x1c1ef20, always against addr=10.0.0.2, port=4420, and always ending "qpair failed and we were unable to recover it." ...]
00:34:29.350 [2024-07-11 21:41:03.833784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.833819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.833981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.834008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.834130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.834160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.834305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.834347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.834531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.834561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.834728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.834761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.834879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.834905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.835015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.835043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.835141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.835168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.835323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.835353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 
00:34:29.350 [2024-07-11 21:41:03.835525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.835555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.835663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.350 [2024-07-11 21:41:03.835693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.350 qpair failed and we were unable to recover it. 00:34:29.350 [2024-07-11 21:41:03.835843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.835870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.836005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.836032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.836145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.836189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.836367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.836396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.836609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.836639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.836795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.836822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.836954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.836981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.837109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.837136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 
00:34:29.351 [2024-07-11 21:41:03.837265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.837309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.837450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.837479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.837645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.837675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.837816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.837844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.837954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.837982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.838168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.838223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.838370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.838400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.838523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.838552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.838685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.838713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.838816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.838843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 
00:34:29.351 [2024-07-11 21:41:03.838953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.838980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.839105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.839134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.839275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.839305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.839445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.839493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.839641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.839668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.839796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.839823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.839955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.839981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.840131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.840161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.840303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.840333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.840478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.840508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 
00:34:29.351 [2024-07-11 21:41:03.840678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.840719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.840878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.840918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.841047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.841079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.841251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.841283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.841393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.841423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.841590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.841634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.841763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.841809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.841915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.841942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.842054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.842081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.351 qpair failed and we were unable to recover it. 00:34:29.351 [2024-07-11 21:41:03.842212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.351 [2024-07-11 21:41:03.842243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 
00:34:29.352 [2024-07-11 21:41:03.842428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.842486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.842624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.842651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.842778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.842819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.842934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.842963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.843093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.843120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.843260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.843287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.843452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.843515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.843657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.843685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.843818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.843846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.843957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.843984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 
00:34:29.352 [2024-07-11 21:41:03.844122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.844151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.844274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.844318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.844467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.844498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.844652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.844678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.844812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.844839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.844940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.844967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.845070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.845096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.845243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.845273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.845419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.845452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.845581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.845611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 
00:34:29.352 [2024-07-11 21:41:03.845787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.845828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.845940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.845968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.846091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.846136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.846267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.846294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.846439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.846470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.846609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.846639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.846771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.846801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.846963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.846990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.847128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.847155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.847260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.847287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 
00:34:29.352 [2024-07-11 21:41:03.847448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.847475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.847577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.847608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.847765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.847792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.847928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.847958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.848064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.848093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.848222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.848266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.848450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.848496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.848636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.848676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.848856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.848897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.849053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.849085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 
00:34:29.352 [2024-07-11 21:41:03.849401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.849452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.849704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.849758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.849909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.849937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.352 qpair failed and we were unable to recover it. 00:34:29.352 [2024-07-11 21:41:03.850087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.352 [2024-07-11 21:41:03.850119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.850266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.850344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.850478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.850524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.850679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.850706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.850874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.850908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.851055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.851085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.851231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.851262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 
00:34:29.353 [2024-07-11 21:41:03.851409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.851438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.851554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.851584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.851758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.851788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.851958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.851987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.852126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.852156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.852301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.852331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.852470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.852501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.852648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.852678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.852834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.852866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.853029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.853076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 
00:34:29.353 [2024-07-11 21:41:03.853208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.853257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.853406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.853453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.853611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.853638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.853777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.853805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.853931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.853977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.854126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.854169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.854375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.854428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.854593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.854620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.854726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.854761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.854909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.854953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 
00:34:29.353 [2024-07-11 21:41:03.855143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.855173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.855425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.855473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.855607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.855635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.855738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.855786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.855940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.855971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.856119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.856149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.856291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.856320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.856469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.856498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.856620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.856648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 00:34:29.353 [2024-07-11 21:41:03.856821] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.353 [2024-07-11 21:41:03.856851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.353 qpair failed and we were unable to recover it. 
00:34:29.353 [2024-07-11 21:41:03.856997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.353 [2024-07-11 21:41:03.857027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.353 qpair failed and we were unable to recover it.
[... the same three-record sequence (posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 → nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error → qpair failed and we were unable to recover it) repeats continuously over console time 00:34:29.353–00:34:29.358 (target time 21:41:03.857–21:41:03.895), cycling through tqpair handles 0x7fb7a8000b90, 0x7fb7a0000b90, 0x7fb798000b90, and 0x1c1ef20, always against addr=10.0.0.2, port=4420 ...]
00:34:29.358 [2024-07-11 21:41:03.895725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.895758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.895917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.895963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.896078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.896145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.896430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.896483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.896640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.896667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.896818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.896864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.897054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.897084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.897366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.897415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.897575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.897606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.358 [2024-07-11 21:41:03.897713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.897741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 
00:34:29.358 [2024-07-11 21:41:03.897887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.358 [2024-07-11 21:41:03.897914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.358 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.898073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.898103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.898269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.898299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.898470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.898499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.898615] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.898644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.898800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.898827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.898979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.899008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.899178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.899208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.899351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.899381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.899569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.899615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 
00:34:29.359 [2024-07-11 21:41:03.899732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.899778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.899936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.899967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.900116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.900147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.900292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.900324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.900466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.900496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.900678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.900705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.900833] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.900860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.900966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.900994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.901154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.901182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.901305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.901337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 
00:34:29.359 [2024-07-11 21:41:03.901506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.901536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.901664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.901709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.901853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.901882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.902027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.902071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.902222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.902266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.902438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.902503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.902655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.902683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.902818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.902847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.903027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.903057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.903178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.903209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 
00:34:29.359 [2024-07-11 21:41:03.903356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.903386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.903561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.903593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.903716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.903744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.903914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.903941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.904089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.904118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.904268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.904314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.904457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.904501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.904634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.359 [2024-07-11 21:41:03.904661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.359 qpair failed and we were unable to recover it. 00:34:29.359 [2024-07-11 21:41:03.904837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.904887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.905005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.905035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 
00:34:29.360 [2024-07-11 21:41:03.905187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.905215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.905345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.905371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.905508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.905535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.905667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.905693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.905843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.905887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.906039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.906069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.906237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.906280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.906390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.906418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.906560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.906587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.906690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.906718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 
00:34:29.360 [2024-07-11 21:41:03.906849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.906895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.907020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.907065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.907195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.907222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.907323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.907350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.907508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.907535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.907690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.907716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.907874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.907904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.908043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.908073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.908233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.908277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.908412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.908441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 
00:34:29.360 [2024-07-11 21:41:03.908600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.908627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.908735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.908774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.908934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.908961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.909116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.909143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.909288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.909332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.909465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.909492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.909628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.909655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.909800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.909828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.909990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.910017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.910209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.910253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 
00:34:29.360 [2024-07-11 21:41:03.910354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.910381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.910515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.910543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.910646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.910673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.910775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.910803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.910937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.910965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.911113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.911157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.911288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.911315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.911447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.911474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.911582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.911613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.911722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.911749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 
00:34:29.360 [2024-07-11 21:41:03.911872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.911902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.912084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.912113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.360 qpair failed and we were unable to recover it. 00:34:29.360 [2024-07-11 21:41:03.912252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.360 [2024-07-11 21:41:03.912292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.912433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.912461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.912594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.912622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.912751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.912789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.912929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.912956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.913062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.913089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.913263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.913293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.913435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.913465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 
00:34:29.361 [2024-07-11 21:41:03.913606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.913636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.913791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.913820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.914007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.914053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.914201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.914244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.914425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.914470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.914600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.914627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.914766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.914794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.914937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.914981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.915154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.915227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.915387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.915414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 
00:34:29.361 [2024-07-11 21:41:03.915542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.915569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.915718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.915764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.915921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.915966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.916085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.916113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.916344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.916396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.916544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.916574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.916748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.916784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.916957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.916987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.917104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.917134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.917275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.917305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 
00:34:29.361 [2024-07-11 21:41:03.917477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.917507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.917631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.917658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.917802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.917843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.918007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.918039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.918182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.918212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.918361] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.918391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.918594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.918627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.918761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.918790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.918920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.918967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 00:34:29.361 [2024-07-11 21:41:03.919144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.361 [2024-07-11 21:41:03.919192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.361 qpair failed and we were unable to recover it. 
00:34:29.361 [2024-07-11 21:41:03.919372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.361 [2024-07-11 21:41:03.919416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.361 qpair failed and we were unable to recover it.
00:34:29.361-00:34:29.366 [2024-07-11 21:41:03.919575 - 21:41:03.957787] (the same posix_sock_create "connect() failed, errno = 111" / nvme_tcp_qpair_connect_sock error pair repeats over 200 more times, cycling through tqpair handles 0x7fb7a0000b90, 0x7fb7a8000b90 and 0x7fb798000b90, always against addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it.")
00:34:29.366 [2024-07-11 21:41:03.957943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.957971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.958124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.958154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.958314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.958344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.958458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.958489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.958669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.958696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.958832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.958860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.958991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.959018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.959135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.959167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.959322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.959380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 00:34:29.366 [2024-07-11 21:41:03.959528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.366 [2024-07-11 21:41:03.959558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.366 qpair failed and we were unable to recover it. 
00:34:29.366 [2024-07-11 21:41:03.959738] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.959770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.959905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.959933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.960095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.960123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.960304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.960351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.960503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.960555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.960715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.960746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.960933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.960982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.961263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.961316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.961464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.961506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.961637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.961664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 
00:34:29.367 [2024-07-11 21:41:03.961830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.961874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.962018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.962048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.962193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.962223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.962412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.962442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.962558] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.962588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.962697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.962726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.962880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.962907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.963063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.963089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.963243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.963272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.963444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.963474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 
00:34:29.367 [2024-07-11 21:41:03.963620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.963651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.963816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.963843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.963971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.963998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.964172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.964213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.964397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.964426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.964596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.964625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.964737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.964772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.964946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.964972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.965103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.965129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.965225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.965251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 
00:34:29.367 [2024-07-11 21:41:03.965394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.965427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.965572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.965602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.965771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.965799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.965932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.965960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.966091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.966122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.966275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.966305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.966477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.966508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.367 [2024-07-11 21:41:03.966668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.367 [2024-07-11 21:41:03.966707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.367 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.966848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.966877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.967035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.967064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 
00:34:29.368 [2024-07-11 21:41:03.967219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.967246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.967347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.967374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.967505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.967532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.967633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.967660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.967793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.967820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.967987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.968019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.968162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.968190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.968322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.968349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.968476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.968503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.968649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.968689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 
00:34:29.368 [2024-07-11 21:41:03.968891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.968923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.969048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.969081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.969227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.969257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.969428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.969458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.969628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.969659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.969810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.969838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.970024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.970084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.970212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.970257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.970435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.970479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.970619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.970647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 
00:34:29.368 [2024-07-11 21:41:03.970828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.970874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.971032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.971061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.971180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.971223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.971352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.971378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.971485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.971512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.971639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.971665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.971825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.971853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.971961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.971987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.972100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.972126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.972261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.972287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 
00:34:29.368 [2024-07-11 21:41:03.972391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.972418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.972553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.972580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.972723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.972764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.972872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.972900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.973037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.973068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.973201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.973244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.973378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.973404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.973562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.973588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.973769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.973830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.973975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.974006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 
00:34:29.368 [2024-07-11 21:41:03.974151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.974180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.368 qpair failed and we were unable to recover it. 00:34:29.368 [2024-07-11 21:41:03.974358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.368 [2024-07-11 21:41:03.974388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.974538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.974568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.974696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.974722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.974885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.974912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.975054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.975089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.975205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.975232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.975402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.975432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.975600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.975629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.975740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.975777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 
00:34:29.369 [2024-07-11 21:41:03.975970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.975999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.976122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.976165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.976329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.976359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.976477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.976508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.976630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.976660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.976826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.976854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.977032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.977063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.977242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.977268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.977379] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.977406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.977546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.977574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 
00:34:29.369 [2024-07-11 21:41:03.977728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.977773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.977933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.977964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.978140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.978169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.978350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.978379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.978521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.978550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.978722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.978771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.978942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.978971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.979111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.979140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.979258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.979287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.979460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.979510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 
00:34:29.369 [2024-07-11 21:41:03.979649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.979676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.979828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.979873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.980059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.980090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.980231] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.980260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.980396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.980425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.980573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.980599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.980768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.980795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.980904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.980930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.981085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.981121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 00:34:29.369 [2024-07-11 21:41:03.981265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.369 [2024-07-11 21:41:03.981295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.369 qpair failed and we were unable to recover it. 
00:34:29.369 [2024-07-11 21:41:03.981416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.369 [2024-07-11 21:41:03.981446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.369 qpair failed and we were unable to recover it.
00:34:29.369 [2024-07-11 21:41:03.981568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.369 [2024-07-11 21:41:03.981597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.369 qpair failed and we were unable to recover it.
00:34:29.371 [2024-07-11 21:41:03.994749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.371 [2024-07-11 21:41:03.994814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.371 qpair failed and we were unable to recover it.
00:34:29.374 [... the same three-line failure sequence repeats continuously from 21:41:03.981 through 21:41:04.018, alternating between tqpair=0x7fb7a8000b90, tqpair=0x7fb7a0000b90, and tqpair=0x1c1ef20; every attempt targets addr=10.0.0.2, port=4420 and ends with "qpair failed and we were unable to recover it." ...]
00:34:29.374 [2024-07-11 21:41:04.018460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.018490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.018647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.018676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.018793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.018821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.018976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.019021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.019188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.019237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.019352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.019397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.019556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.019583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.019693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.019720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.019893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.019939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.020050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.020077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 
00:34:29.374 [2024-07-11 21:41:04.020236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.020266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.020451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.020479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.020641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.020668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.020809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.020837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.020946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.020973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.021113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.021139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.021239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.021266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.021374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.021401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.021533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.021559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.021698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.021725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 
00:34:29.374 [2024-07-11 21:41:04.021852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.021879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.021986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.022013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.022157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.022183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.022327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.022354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.022482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.022508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.022638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.022664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.022766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.022794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.022903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.022929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.023038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.023064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.023169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.023197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 
00:34:29.374 [2024-07-11 21:41:04.023303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.023330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.023436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.023462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.023595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.023622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.023760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.023787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.023888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.023914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.024051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.024078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.374 qpair failed and we were unable to recover it. 00:34:29.374 [2024-07-11 21:41:04.024237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.374 [2024-07-11 21:41:04.024267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.024399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.024426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.024529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.024556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.024682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.024709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 
00:34:29.375 [2024-07-11 21:41:04.024857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.024884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.025042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.025085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.025235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.025278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.025420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.025448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.025585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.025611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.025725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.025758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.025892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.025921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.026037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.026063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.026235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.026263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.026364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.026391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 
00:34:29.375 [2024-07-11 21:41:04.026489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.026516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.026649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.026677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.026828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.026869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.026979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.027007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.027135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.027162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.027281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.027310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.027450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.027494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.027602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.027629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.027759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.027789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.027911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.027941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 
00:34:29.375 [2024-07-11 21:41:04.028067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.028097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.028222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.028264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.028371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.028400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.028544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.028579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.028739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.028770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.028901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.028947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.029059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.029089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.029211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.029241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.029386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.029424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.029537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.029568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 
00:34:29.375 [2024-07-11 21:41:04.029748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.029788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.029934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.029961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.030129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.030157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.030310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.030341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.030488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.030516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.030646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.030674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.030837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.030864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.031003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.031031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.031154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.031184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.031334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.031361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 
00:34:29.375 [2024-07-11 21:41:04.031490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.031517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.031646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.031673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.031828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.031859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.031993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.032038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.032157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.032184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.032346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.032373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.032504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.032531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.032642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.032668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.032826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.032854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.032988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.033016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 
00:34:29.375 [2024-07-11 21:41:04.033165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.033205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.033309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.033337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.033476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.033503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.033610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.033647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.375 [2024-07-11 21:41:04.033765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.375 [2024-07-11 21:41:04.033793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.375 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.033939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.033966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.034085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.034113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.034275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.034303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.034452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.034482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.034635] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.034662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 
00:34:29.376 [2024-07-11 21:41:04.034793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.034821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.034934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.034962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.035104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.035134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.035276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.035321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.035438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.035468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.035698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.035727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.035889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.035928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.036087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.036123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.036319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.036354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.036482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.036515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 
00:34:29.376 [2024-07-11 21:41:04.036670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.036701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.036879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.036920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.037081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.037112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.037291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.037321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.037456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.037502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.037650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.037679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.037789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.037823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.037938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.037983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.038135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.038165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.038334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.038364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 
00:34:29.376 [2024-07-11 21:41:04.038481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.038507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.038654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.038683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.038876] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.038922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.039078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.039125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.039249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.039296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.039452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.039497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.039661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.039688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.039863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.039909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.040056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.040099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 00:34:29.376 [2024-07-11 21:41:04.040221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.376 [2024-07-11 21:41:04.040266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.376 qpair failed and we were unable to recover it. 
00:34:29.376 [2024-07-11 21:41:04.040460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.376 [2024-07-11 21:41:04.040492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.376 qpair failed and we were unable to recover it.
00:34:29.376 [the same three-line failure record repeats roughly 200 more times, from 2024-07-11 21:41:04.040 through 21:41:04.075904, cycling through tqpair handles 0x7fb7a0000b90, 0x7fb7a8000b90, and 0x1c1ef20 — always addr=10.0.0.2, port=4420, always errno = 111]
00:34:29.381 [2024-07-11 21:41:04.076016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.076042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.076178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.076213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.076347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.076373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.076504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.076530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.076662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.076688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.076813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.076840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.076941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.076968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.077099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.077157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.077261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.077288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.077416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.077444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 
00:34:29.381 [2024-07-11 21:41:04.077584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.077618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.077759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.077786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.077942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.077970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.078131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.078175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.078315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.078345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.078454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.078481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.078616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.078642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.078764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.078796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.078904] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.078931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.079065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.079091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 
00:34:29.381 [2024-07-11 21:41:04.079251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.079278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.079418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.079444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.079541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.079567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.079677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.079704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.079810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.079837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.079937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.079964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.080072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.080098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.080233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.080260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.080369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.080396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.080539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.080579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 
00:34:29.381 [2024-07-11 21:41:04.080745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.080786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.080929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.080956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.081112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.081141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.081292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.081322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.381 qpair failed and we were unable to recover it. 00:34:29.381 [2024-07-11 21:41:04.081483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.381 [2024-07-11 21:41:04.081513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.081658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.081686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.081824] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.081851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.081963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.081989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.082131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.082158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.082306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.082346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 
00:34:29.382 [2024-07-11 21:41:04.082486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.082514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.082650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.082677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.082794] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.082823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.082927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.082954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.083049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.083081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.083216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.083244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.083398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.083441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.083575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.083601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.083735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.083778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.083899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.083925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 
00:34:29.382 [2024-07-11 21:41:04.084049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.084098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.084215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.084261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.084401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.084427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.084596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.084623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.084761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.084788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.084895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.084921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.085018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.085056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.085243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.085287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.085453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.085481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.085616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.085648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 
00:34:29.382 [2024-07-11 21:41:04.086438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.086469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.086639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.086667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.086797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.086824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.086982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.087009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.382 [2024-07-11 21:41:04.087170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.382 [2024-07-11 21:41:04.087209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.382 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.087346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.087374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.087505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.087531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.087640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.087668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.087835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.087863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.087965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.087991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 
00:34:29.659 [2024-07-11 21:41:04.088131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.088158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.088287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.088313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.088477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.088503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.088660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.088686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.089633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.089665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.089827] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.089858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.089982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.090010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.090146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.090172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.090277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.090304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.090430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.090457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 
00:34:29.659 [2024-07-11 21:41:04.090608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.090636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.090763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.090791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.090889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.090916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.091026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.091054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.091158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.091189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.091302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.091338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.091447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.091474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.091632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.091659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.091814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.091854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.091972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.092003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 
00:34:29.659 [2024-07-11 21:41:04.092172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.092200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.092341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.092385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.092539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.659 [2024-07-11 21:41:04.092570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.659 qpair failed and we were unable to recover it. 00:34:29.659 [2024-07-11 21:41:04.092695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.092724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.092862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.092892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.093010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.093051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.093193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.093223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.093398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.093448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.093568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.093608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.093763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.093792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 
00:34:29.660 [2024-07-11 21:41:04.093927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.093957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.094103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.094135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.094295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.094327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.094470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.094518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.094667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.094698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.094879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.094908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.095036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.095066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.095237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.095280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.095561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.095594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.095748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.095782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 
00:34:29.660 [2024-07-11 21:41:04.095913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.095958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.096120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.096152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.096295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.096343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.096536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.096588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.096745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.096778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.096883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.096910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.097014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.097057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.097171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.097200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.097343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.097372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.097525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.097571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 
00:34:29.660 [2024-07-11 21:41:04.097726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.097778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.097912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.097957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.098066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.098093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.098225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.098252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.098410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.098464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.098623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.098650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.099328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.099364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.099585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.099635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.660 qpair failed and we were unable to recover it. 00:34:29.660 [2024-07-11 21:41:04.099787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.660 [2024-07-11 21:41:04.099815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.661 qpair failed and we were unable to recover it. 00:34:29.661 [2024-07-11 21:41:04.099925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.661 [2024-07-11 21:41:04.099952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.661 qpair failed and we were unable to recover it. 
00:34:29.661 [2024-07-11 21:41:04.100092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.661 [2024-07-11 21:41:04.100124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.661 qpair failed and we were unable to recover it.
00:34:29.661 [2024-07-11 21:41:04.101834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.661 [2024-07-11 21:41:04.101875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.661 qpair failed and we were unable to recover it.
00:34:29.661 [2024-07-11 21:41:04.101995] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.661 [2024-07-11 21:41:04.102035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.661 qpair failed and we were unable to recover it.
00:34:29.661 [2024-07-11 21:41:04.104891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.661 [2024-07-11 21:41:04.104931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.661 qpair failed and we were unable to recover it.
00:34:29.661 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error / qpair failed and we were unable to recover it) repeats without interruption for tqpairs 0x7fb798000b90, 0x7fb7a0000b90, 0x7fb7a8000b90, and 0x1c1ef20, all against addr=10.0.0.2, port=4420 ...]
00:34:29.666 [2024-07-11 21:41:04.136942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.666 [2024-07-11 21:41:04.136972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.666 qpair failed and we were unable to recover it.
00:34:29.666 [2024-07-11 21:41:04.137120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.137148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.137283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.137312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.137455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.137481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.137616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.137642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.137756] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.137784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.137892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.137920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.138048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.138074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.138223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.138252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.138398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.138424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.138539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.138567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 
00:34:29.666 [2024-07-11 21:41:04.138711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.138737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.138859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.666 [2024-07-11 21:41:04.138885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.666 qpair failed and we were unable to recover it. 00:34:29.666 [2024-07-11 21:41:04.139014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.139043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.139181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.139225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.139380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.139406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.139552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.139578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.139709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.139736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.139853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.139880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.140052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.140079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.140227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.140272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 
00:34:29.667 [2024-07-11 21:41:04.140415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.140441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.140566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.140592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.140726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.140761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.140887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.140931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.141054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.141080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.141245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.141270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.141371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.141402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.141508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.141536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.141682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.141722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.141899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.141931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 
00:34:29.667 [2024-07-11 21:41:04.142108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.142138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.142334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.142362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.142508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.142534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.142626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.142651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.142750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.142782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.142909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.142939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.143112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.143141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.143322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.143351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.143472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.143498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.143658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.143697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 
00:34:29.667 [2024-07-11 21:41:04.143864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.143893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.144020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.144049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.144214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.144258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.144507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.144562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.144663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.144689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.144830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.144857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.144987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.145013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.145182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.145208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.145329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.145355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.145484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.145511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 
00:34:29.667 [2024-07-11 21:41:04.145610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.145637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.145817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.667 [2024-07-11 21:41:04.145861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.667 qpair failed and we were unable to recover it. 00:34:29.667 [2024-07-11 21:41:04.145967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.145992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.146132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.146159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.146325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.146350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.146486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.146511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.146614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.146641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.146804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.146829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.146959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.146985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.147124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.147149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 
00:34:29.668 [2024-07-11 21:41:04.147252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.147277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.147410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.147436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.147569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.147596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.147721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.147747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.147887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.147913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.148047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.148074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.148204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.148235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.148337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.148363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.148493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.148520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.148673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.148698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 
00:34:29.668 [2024-07-11 21:41:04.148829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.148857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.148960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.148986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.149087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.149112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.149240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.149265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.149431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.149456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.149593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.149618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.149720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.149760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.149867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.149892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.149989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.150015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.150120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.150147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 
00:34:29.668 [2024-07-11 21:41:04.150260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.150286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.150421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.150446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.150583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.150609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.150739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.150774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.150902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.150929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.151057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.151084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.668 [2024-07-11 21:41:04.151187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.668 [2024-07-11 21:41:04.151213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.668 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.151347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.151387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.151552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.151580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.151744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.151776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 
00:34:29.669 [2024-07-11 21:41:04.151908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.151934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.152070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.152097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.152248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.152277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.152414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.152441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.152553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.152580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.152719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.152745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.152912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.152942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.153134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.153163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.153319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.153353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.153450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.153478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 
00:34:29.669 [2024-07-11 21:41:04.153583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.153609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.153718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.153760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.153897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.153924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.154091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.154118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.154224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.154251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.154398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.154424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.154581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.154612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.154729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.154763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.154921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.154948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.155048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.155074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 
00:34:29.669 [2024-07-11 21:41:04.155203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.155232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.155377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.155403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.155544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.155571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.155708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.155734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.156716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.156769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.156910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.156936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.157070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.157097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.157257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.157284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.157419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.157444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.157583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.157611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 
00:34:29.669 [2024-07-11 21:41:04.157749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.157783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.157887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.157913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.158060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.158087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.158239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.158269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.158414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.158444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.158590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.158619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.158769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.158796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.669 qpair failed and we were unable to recover it. 00:34:29.669 [2024-07-11 21:41:04.158905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.669 [2024-07-11 21:41:04.158931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.159041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.159067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.159239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.159268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 
00:34:29.670 [2024-07-11 21:41:04.159412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.159442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.159566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.159592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.159730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.159763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.159903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.159929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.160059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.160084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.160208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.160237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.160406] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.160438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.160542] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.160571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.160751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.160782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.160889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.160916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 
00:34:29.670 [2024-07-11 21:41:04.161040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.161069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.161241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.161270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.161440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.161469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.161613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.161643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.161792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.161820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.161932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.161958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.162066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.162098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.162250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.162281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.162396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.162426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 00:34:29.670 [2024-07-11 21:41:04.162576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.670 [2024-07-11 21:41:04.162602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.670 qpair failed and we were unable to recover it. 
00:34:29.670 [2024-07-11 21:41:04.162731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.162766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.162900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.162926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.163055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.163082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.163271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.163300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.163468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.163497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.163663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.163692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.163850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.163877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.164016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.164042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.164150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.164176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.164376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.164405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.164566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.164604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.164760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.164805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.164963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.164989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.165119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.165164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.165330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.165370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.165482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.165511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.165678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.165707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.165854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.670 [2024-07-11 21:41:04.165882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.670 qpair failed and we were unable to recover it.
00:34:29.670 [2024-07-11 21:41:04.166020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.166046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.166190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.166216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.166332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.166360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.166503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.166532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.166644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.166675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.166848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.166874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.167027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.167064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.167242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.167271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.167411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.167437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.167602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.167633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.167816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.167843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.167973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.167999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.168163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.168189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.168344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.168371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.168536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.168562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.168722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.168762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.168887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.168914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.169045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.169071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.169212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.169243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.169413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.169440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.169572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.169598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.169729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.169760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.169894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.169920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.170019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.170046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.170159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.170187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.170341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.170377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.170534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.170560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.170717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.170743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.170862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.170889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.170990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.171016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.171196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.171222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.171353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.171380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.171493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.171520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.171673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.171701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.171855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.171882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.172020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.172047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.172203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.172229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.172337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.671 [2024-07-11 21:41:04.172363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.671 qpair failed and we were unable to recover it.
00:34:29.671 [2024-07-11 21:41:04.172507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.172533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.172698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.172724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.172860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.172887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.172992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.173019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.173150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.173176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.173294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.173321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.173477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.173504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.173609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.173635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.173769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.173795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.173929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.173954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.174057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.174085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.174241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.174273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.174415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.174441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.174573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.174602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.174718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.174764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.174916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.174942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.175078] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.175106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.175266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.175292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.175432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.175458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.175587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.175613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.175740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.175775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.175872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.175898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.176019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.176045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.176154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.176179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.176307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.176333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.176511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.176538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.176667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.176692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.176826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.176854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.176955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.176983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.177099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.177127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.177232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.177259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.177362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.177388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.177512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.177537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.177692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.672 [2024-07-11 21:41:04.177718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.672 qpair failed and we were unable to recover it.
00:34:29.672 [2024-07-11 21:41:04.177873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.177914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.178057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.178084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.178241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.178285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.178482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.178528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.178659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.178684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.178799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.178824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.178944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.178975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.179128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.179171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.179303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.179328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.179487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.179513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.179646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.179671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.179817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.179860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.179979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.180022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.180194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.180222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.180390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.180415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.180522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.180548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.180674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.180699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.180843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.180869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.181001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.181027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.181157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.181182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.181321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.181345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.181477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.181502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.181632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.181657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.181815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.181860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.181991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.182017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.182172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.182197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.182351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.182381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.182514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.182540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.182667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.182692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.182858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.182907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.183083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.183126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.183251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.183278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.183450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.183475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.183602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.183628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.183762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.183788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.183919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.183963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.184143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.184189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.184368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.184411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.184570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.184595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.184751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.184783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.673 qpair failed and we were unable to recover it.
00:34:29.673 [2024-07-11 21:41:04.184948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.673 [2024-07-11 21:41:04.184990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.185156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.185182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.185381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.185427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.185582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.185607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.185724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.185775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.185931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.185974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.186104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.186146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.186293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.186335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.186469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.186496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.186623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.186649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.186791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.186817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.186929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.186955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.187114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.187139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.187247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.187274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.187411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.187436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.187541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.187567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.187724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.187750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.187913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.187956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.188109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.188150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.188268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.188297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.188447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.188472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.188627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.188652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.188760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.188786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.188921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.188963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.189111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.189152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.189284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.189309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.189462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.189492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.189631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.189656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.189790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.189816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.189943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.189968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.190107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.190132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.190294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.190321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.190480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.190505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.190609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.190634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.190734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.190763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.190893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.190935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.191127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.191153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.191280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.674 [2024-07-11 21:41:04.191306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.674 qpair failed and we were unable to recover it.
00:34:29.674 [2024-07-11 21:41:04.191443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.675 [2024-07-11 21:41:04.191469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.675 qpair failed and we were unable to recover it.
00:34:29.675 [2024-07-11 21:41:04.191601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.675 [2024-07-11 21:41:04.191628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.675 qpair failed and we were unable to recover it.
00:34:29.675 [2024-07-11 21:41:04.191762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.191787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.191934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.191977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.192127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.192156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.192340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.192365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.192498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.192523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.192618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.192644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.192772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.192797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.192918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.192960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.193115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.193158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.193282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.193307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 
00:34:29.675 [2024-07-11 21:41:04.193415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.193441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.193573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.193598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.193722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.193747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.193913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.193958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.194116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.194145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.194317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.194345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.194495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.194521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.194646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.194672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.194812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.194837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.194937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.194963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 
00:34:29.675 [2024-07-11 21:41:04.195103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.195146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.195302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.195327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.195487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.195513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.195646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.195672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.195807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.675 [2024-07-11 21:41:04.195834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.675 qpair failed and we were unable to recover it. 00:34:29.675 [2024-07-11 21:41:04.195987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.196012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.196126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.196172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.196329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.196355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.196510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.196535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.196665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.196690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 
00:34:29.676 [2024-07-11 21:41:04.196846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.196891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.197041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.197068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.197256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.197298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.197429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.197454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.197585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.197610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.197744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.197775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.197933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.197976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.198157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.198184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.198354] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.198381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.198502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.198527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 
00:34:29.676 [2024-07-11 21:41:04.198639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.198665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.198783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.198809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.198993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.199037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.199146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.199171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.199278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.199304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.199434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.199459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.199586] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.199612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.199767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.199793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.199945] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.199973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.200121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.200164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 
00:34:29.676 [2024-07-11 21:41:04.200319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.200345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.200445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.200471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.200609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.676 [2024-07-11 21:41:04.200635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.676 qpair failed and we were unable to recover it. 00:34:29.676 [2024-07-11 21:41:04.200787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.200816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.200988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.201031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.201182] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.201227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.201395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.201421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.201555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.201581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.201715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.201741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.201866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.201895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 
00:34:29.677 [2024-07-11 21:41:04.202061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.202106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.202287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.202333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.202493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.202518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.202632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.202658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.202840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.202884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.203028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.203079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.203223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.203257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.203410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.203436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.203593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.203619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.203761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.203787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 
00:34:29.677 [2024-07-11 21:41:04.203964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.203992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.204191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.204234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.204392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.204418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.204522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.204548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.204669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.204709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.204843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.204886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.205067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.205096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.205238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.205267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.205382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.205411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 00:34:29.677 [2024-07-11 21:41:04.205561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.677 [2024-07-11 21:41:04.205589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.677 qpair failed and we were unable to recover it. 
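errno = 111 on Linux is ECONNREFUSED: the target host answered, but nothing was listening on 10.0.0.2:4420, so every connect() issued by posix_sock_create came straight back with a TCP RST. A minimal standalone sketch (not SPDK code; it only assumes a reachable host with no listener on the port) that reproduces the same errno:

/* Minimal sketch (not SPDK code): connect() to a reachable host with no
 * listener on the port yields errno = 111 (ECONNREFUSED) on Linux, the
 * same errno posix_sock_create reports above. Address and port mirror
 * the log. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr = {
        .sin_family = AF_INET,
        .sin_port = htons(4420),            /* NVMe/TCP default port */
    };
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
        /* With the host up but no listener on the port, this prints
         * errno = 111 (Connection refused). */
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }

    close(fd);
    return 0;
}

Against a host with no NVMe/TCP listener on 4420 this prints connect() failed, errno = 111 (Connection refused), the exact pair of facts reported in the log lines above.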
00:34:29.678 [... tqpair=0x7fb7a8000b90 keeps failing with errno = 111 through 2024-07-11 21:41:04.206745 ...]
00:34:29.678 [2024-07-11 21:41:04.206871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.678 [2024-07-11 21:41:04.206910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.678 qpair failed and we were unable to recover it.
00:34:29.679 [... the failures then alternate between tqpair=0x1c1ef20 and tqpair=0x7fb7a8000b90, every attempt connect() failed, errno = 111 to addr=10.0.0.2, port=4420, through 2024-07-11 21:41:04.212490 ...]
00:34:29.679 [... tqpair=0x1c1ef20 continues to fail with errno = 111 through 2024-07-11 21:41:04.223601 ...]
00:34:29.681 [2024-07-11 21:41:04.223771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.681 [2024-07-11 21:41:04.223829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.681 qpair failed and we were unable to recover it.
00:34:29.682 [... tqpair=0x7fb7a8000b90 fails the same way through 2024-07-11 21:41:04.224903; no connect() in this window succeeds ...]
00:34:29.682 [2024-07-11 21:41:04.225056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.225085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.225208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.225235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.225339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.225367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.225500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.225526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.225638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.225663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.225795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.225837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.225988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.226014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.226148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.226174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.226307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.226351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.226505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.226530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 
00:34:29.682 [2024-07-11 21:41:04.226661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.226686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.226802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.226846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.226989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.227016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.227127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.227154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.227327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.227368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.227529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.227557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.227695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.227724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.227882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.227910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.228065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.228108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.228262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.228288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 
00:34:29.682 [2024-07-11 21:41:04.228423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.228449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.228600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.228630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.228790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.228816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.228918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.228944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.229069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.229099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.682 [2024-07-11 21:41:04.229230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.682 [2024-07-11 21:41:04.229256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.682 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.229393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.229419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.229514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.229540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.229640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.229666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.229777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.229804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 
00:34:29.683 [2024-07-11 21:41:04.229909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.229936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.230092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.230118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.230219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.230262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.230443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.230469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.230572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.230598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.230703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.230729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.230893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.230919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.231027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.231052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.231178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.231207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.231388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.231417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 
00:34:29.683 [2024-07-11 21:41:04.231563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.231588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.231723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.231749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.231948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.231974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.232081] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.232107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.232263] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.232307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.232439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.232465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.232640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.232668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.232812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.232839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.232966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.232992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.233119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.233144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 
00:34:29.683 [2024-07-11 21:41:04.233245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.233270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.233419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.233447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.233605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.233631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.233760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.683 [2024-07-11 21:41:04.233800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.683 qpair failed and we were unable to recover it. 00:34:29.683 [2024-07-11 21:41:04.233910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.233938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.234096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.234122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.234252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.234278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.234429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.234458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.234607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.234632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.234743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.234777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 
00:34:29.684 [2024-07-11 21:41:04.234880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.234905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.235038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.235065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.235192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.235218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.235326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.235352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.235481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.235507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.235664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.235711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.235849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.235875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.236030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.236056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.236155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.236180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.236285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.236312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 
00:34:29.684 [2024-07-11 21:41:04.236434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.236472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.236595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.236640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.236796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.236824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.236931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.236958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.237096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.237121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.237243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.237274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.237449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.684 [2024-07-11 21:41:04.237475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.684 qpair failed and we were unable to recover it. 00:34:29.684 [2024-07-11 21:41:04.237610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.237636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.237745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.237780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.237920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.237946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 
00:34:29.685 [2024-07-11 21:41:04.238064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.238106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.238242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.238268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.238373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.238400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.238561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.238587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.238717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.238743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.238875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.238919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.239069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.239113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.239292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.239335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.239498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.239524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.239655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.239681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 
00:34:29.685 [2024-07-11 21:41:04.239842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.239892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.240018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.240043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.240225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.240269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.240413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.240439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.240567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.240592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.240721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.240747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.240878] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.240922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.241050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.241094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.241224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.241251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.241385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.241411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 
00:34:29.685 [2024-07-11 21:41:04.241567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.241593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.241728] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.241761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.241871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.241896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.242027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.242053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.242180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.242206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.242332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.242382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.242548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.242574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.242704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.242730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.242862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.242906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.243028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.243056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 
00:34:29.685 [2024-07-11 21:41:04.243250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.243292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.243451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.243477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.243634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.243659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.243808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.243838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.244005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.244034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.685 qpair failed and we were unable to recover it. 00:34:29.685 [2024-07-11 21:41:04.244190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.685 [2024-07-11 21:41:04.244232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.244409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.244437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.244596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.244621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.244727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.244758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.244943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.244989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 
00:34:29.686 [2024-07-11 21:41:04.245180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.245223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.245431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.245474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.245584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.245609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.245741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.245771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.245949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.245992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.246186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.246229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.246358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.246401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.246536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.246561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.246695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.246721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.246850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.246876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 
00:34:29.686 [2024-07-11 21:41:04.247008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.247033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.247149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.247176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.247288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.247315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.247418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.247445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.247576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.247601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.247705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.247731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.247839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.247864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.247976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.248002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.248162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.248188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.248352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.248378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 
00:34:29.686 [2024-07-11 21:41:04.248519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.248545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.248649] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.248675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.248829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.248875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.249004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.249033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.249214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.249241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.249346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.249375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.249508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.249533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.249663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.249690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.249826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.249851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 00:34:29.686 [2024-07-11 21:41:04.249984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.686 [2024-07-11 21:41:04.250010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.686 qpair failed and we were unable to recover it. 
00:34:29.686 [2024-07-11 21:41:04.250135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.686 [2024-07-11 21:41:04.250160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.686 qpair failed and we were unable to recover it.
00:34:29.686 [2024-07-11 21:41:04.250306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.686 [2024-07-11 21:41:04.250331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.686 qpair failed and we were unable to recover it.
00:34:29.686 [2024-07-11 21:41:04.250434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.686 [2024-07-11 21:41:04.250460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.686 qpair failed and we were unable to recover it.
00:34:29.686 [2024-07-11 21:41:04.250569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.686 [2024-07-11 21:41:04.250594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.686 qpair failed and we were unable to recover it.
00:34:29.686 [2024-07-11 21:41:04.250697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.250723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.250887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.250912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.251046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.251091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.251222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.251265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.251383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.251408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.251523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.251550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.251685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.251711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.251845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.251871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.251991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.252036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.252184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.252228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.252321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.252346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.252444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.252469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.252608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.252633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.252772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.252799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.252951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.252979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.253151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.253180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.253331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.253359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.253487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.253512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.253634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.253672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.253843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.253875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.254024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.254060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.254224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.254253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.254370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.254400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.254552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.254581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.254713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.254740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.254910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.254937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.255112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.255141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.255352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.255382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.255500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.255530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.255680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.255709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.255869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.255896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.256014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.256060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.256208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.256238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.256381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.256410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.256560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.256602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.256734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.256786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.256969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.256997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.257137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.257165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.257301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.257328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.257444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.257473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.687 [2024-07-11 21:41:04.257607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.687 [2024-07-11 21:41:04.257650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.687 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.257837] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.257864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.257965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.257991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.258123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.258150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.258279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.258308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.258456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.258487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.258627] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.258656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.258828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.258859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.258981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.259019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.259142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.259173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.259347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.259392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.259582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.259608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.259747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.259780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.259909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.259936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.260064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.260109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.260230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.260260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.260412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.260440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.260539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.260566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.260704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.260731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.260867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.260897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.261011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.261041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.261188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.261218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.261356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.261386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.261527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.261556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.261684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.261710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.261849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.261875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.261976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.262002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.262131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.262173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.262314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.262342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.688 [2024-07-11 21:41:04.262460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.688 [2024-07-11 21:41:04.262491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.688 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.262631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.262660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.262788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.262819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.262922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.262948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.263104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.263130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.263284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.263313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.263456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.263485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.263598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.263627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.263742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.263796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.263938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.263965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.264105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.264131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.264262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.264293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.264408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.264441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.264584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.264614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.264762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.264790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.264901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.264926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.265056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.265085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.265233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.265263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.265413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.265443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.265560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.265589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.265762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.265790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.265895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.265921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.689 [2024-07-11 21:41:04.266058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.689 [2024-07-11 21:41:04.266085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.689 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.266209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.266238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.266382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.266412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.266525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.266554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.266719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.266764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.266905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.266932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.267049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.267079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.267196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.267231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.267376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.267406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.267545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.267574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.267703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.267731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.267848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.267877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.268011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.268036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.268165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.268209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.268351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.268394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.268544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.268587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.268704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.268730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.268867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.268895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.269002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.269028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.269164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.269192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.269378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.269429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.269596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.269651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.269787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.269832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.269988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.270018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.270137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.270166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.270343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.270405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.270561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.270591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.270710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.270736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.690 qpair failed and we were unable to recover it.
00:34:29.690 [2024-07-11 21:41:04.270867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.690 [2024-07-11 21:41:04.270905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.271053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.271081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.271213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.271239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.271391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.271420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.271566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.271595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.271720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.271749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.271888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.271922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.272070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.272100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.272204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.272233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.272353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.272384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.272514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.272559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.272704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.272734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.272871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.272897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.273028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.273055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.273201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.273230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.273371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.273401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.273515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.273545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.273688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.273717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.273863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.273892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.274020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.274051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.274203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.274238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.274396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.274426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.274603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.274667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.274832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.274860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.274968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.274995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.275177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.275205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.275331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.275361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.275506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.691 [2024-07-11 21:41:04.275537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.691 qpair failed and we were unable to recover it.
00:34:29.691 [2024-07-11 21:41:04.275707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.275746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.275871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.275900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.276020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.276050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.276220] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.276263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.276407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.276451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.276585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.276612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.276750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.276785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.276944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.276970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.277092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.277122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.277282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.277312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.277440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.277466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.277611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.277637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.277773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.277800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.277958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.277984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.278113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.278142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.278262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.278292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.278437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.278465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.278601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.278630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.278757] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.692 [2024-07-11 21:41:04.278790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:29.692 qpair failed and we were unable to recover it.
00:34:29.692 [2024-07-11 21:41:04.278930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.278957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.279087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.279116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.279233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.279261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.279389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.279420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.279607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.279654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.279764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.279791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.279900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.279926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.280090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.280133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.280257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.692 [2024-07-11 21:41:04.280300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.692 qpair failed and we were unable to recover it. 00:34:29.692 [2024-07-11 21:41:04.280478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.280523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 
00:34:29.693 [2024-07-11 21:41:04.280633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.280658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.280791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.280817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.280917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.280944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.281079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.281107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.281236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.281263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.281390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.281417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.281529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.281557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.281660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.281686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.281820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.281848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.281970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.281998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 
00:34:29.693 [2024-07-11 21:41:04.282119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.282150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.282298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.282327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.282464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.282496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.282650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.282688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.282826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.282855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.283017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.283063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.283248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.283284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.283440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.283470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.283595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.283625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.283774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.283802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 
00:34:29.693 [2024-07-11 21:41:04.283936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.283962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.284114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.284143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.284250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.284279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.284448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.284477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.284607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.284635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.284795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.284825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.284943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.284972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.285092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.285123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.285299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.285362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.285473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.285502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 
00:34:29.693 [2024-07-11 21:41:04.285668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.285694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.285874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.285918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.286075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.286118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.286241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.286271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.286399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.286425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.286539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.286565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.286692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.286719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.286855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.286885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.693 [2024-07-11 21:41:04.287001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.693 [2024-07-11 21:41:04.287030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.693 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.287188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.287218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 
00:34:29.694 [2024-07-11 21:41:04.287345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.287374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.287494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.287524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.287696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.287726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.287919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.287970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.288131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.288174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.288297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.288341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.288528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.288574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.288670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.288695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.288823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.288867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.289014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.289057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 
00:34:29.694 [2024-07-11 21:41:04.289212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.289255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.289367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.289394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.289530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.289555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.289661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.289688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.289857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.289901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.290073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.290115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.290260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.290297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.290468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.290519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.290703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.290733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.290868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.290901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 
00:34:29.694 [2024-07-11 21:41:04.291032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.291061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.291178] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.291210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.291368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.291398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.291513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.291547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.291683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.291727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.291900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.291939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.292071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.292116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.292241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.292284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.292452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.292479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.292608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.292634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 
00:34:29.694 [2024-07-11 21:41:04.292741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.292772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.292935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.292978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.293157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.694 [2024-07-11 21:41:04.293201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.694 qpair failed and we were unable to recover it. 00:34:29.694 [2024-07-11 21:41:04.293332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.293375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.293501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.293526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.293656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.293681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.293804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.293833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.293974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.294017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.294139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.294181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.294283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.294309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 
00:34:29.695 [2024-07-11 21:41:04.294440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.294465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.294576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.294603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.294768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.294811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.294955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.295004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.295137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.295180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.295290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.295315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.295417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.295442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.295577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.295602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.295698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.295723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.295872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.295916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 
00:34:29.695 [2024-07-11 21:41:04.296041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.296084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.296216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.296241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.296366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.296392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.296523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.296548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.296675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.296701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.296836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.296879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.297024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.297066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.297190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.297235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.297342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.297368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.297469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.297494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 
00:34:29.695 [2024-07-11 21:41:04.297647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.297686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.297828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.297856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.297966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.297993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.298153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.298180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.298311] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.298337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.298452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.298479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.298593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.298620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.298727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.298759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.298909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.298953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.299112] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.299156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 
00:34:29.695 [2024-07-11 21:41:04.299309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.299353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.299511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.299537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.299657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.299685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.299841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.299871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.300005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.300034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.695 [2024-07-11 21:41:04.300143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.695 [2024-07-11 21:41:04.300175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.695 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.300312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.300354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.300499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.300527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.300639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.300666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.300773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.300800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 
00:34:29.696 [2024-07-11 21:41:04.300908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.300935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.301105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.301135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.301243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.301273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.301393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.301428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.301567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.301598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.301747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.301779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.301877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.301901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.302025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.302068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.302197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.302225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.302402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.302451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 
00:34:29.696 [2024-07-11 21:41:04.302555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.302585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.302741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.302790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.302929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.302958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.303095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.303126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.303249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.303278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.303432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.303459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.303626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.303652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.303779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.303821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.303983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.304013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 00:34:29.696 [2024-07-11 21:41:04.304125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.304155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it. 
00:34:29.696 [2024-07-11 21:41:04.304322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.696 [2024-07-11 21:41:04.304351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.696 qpair failed and we were unable to recover it.
00:34:29.696 [... the same three-message sequence — posix_sock_create "connect() failed, errno = 111", nvme_tcp_qpair_connect_sock "sock connection error", and "qpair failed and we were unable to recover it." — repeats continuously from 21:41:04.304 through 21:41:04.339, cycling over tqpair values 0x1c1ef20, 0x7fb798000b90, 0x7fb7a0000b90, and 0x7fb7a8000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:29.700 [2024-07-11 21:41:04.334929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2cf20 (9): Bad file descriptor (the only distinct message interleaved within the run)
00:34:29.701 [2024-07-11 21:41:04.339418] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.339444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.339573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.339604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.339766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.339793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.339906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.339932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.340039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.340068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.340208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.340234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.340339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.340365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.340471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.340496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.340628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.340653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.340773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.340799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 
00:34:29.701 [2024-07-11 21:41:04.340928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.340954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.341068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.341096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.341216] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.341242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.341399] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.341424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.341534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.341560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.341670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.341697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.341839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.341865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.341999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.342042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.342194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.701 [2024-07-11 21:41:04.342239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.701 qpair failed and we were unable to recover it. 00:34:29.701 [2024-07-11 21:41:04.342342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.342369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 
00:34:29.702 [2024-07-11 21:41:04.342498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.342525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.342647] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.342687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.342869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.342908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.343057] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.343087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.343251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.343300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.343466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.343496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.343633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.343663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.343816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.343862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.343981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.344032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.344186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.344227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 
00:34:29.702 [2024-07-11 21:41:04.344358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.344401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.344562] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.344588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.344696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.344722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.344847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.344876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.345000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.345027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.345159] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.345185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.345342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.345368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.345532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.345571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.345704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.345732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.345882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.345924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 
00:34:29.702 [2024-07-11 21:41:04.346099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.346129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.346296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.346325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.346440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.346469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.346643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.346670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.346818] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.346863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.346988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.347017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.347193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.347236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.347393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.347436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.347565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.347591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.347693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.347720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 
00:34:29.702 [2024-07-11 21:41:04.347884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.347928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.348106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.348132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.348260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.348303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.348434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.348460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.348602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.348628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.348814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.348844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.348967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.348994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.349107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.349133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.702 [2024-07-11 21:41:04.349260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.702 [2024-07-11 21:41:04.349286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.702 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.349398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.349437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 
00:34:29.703 [2024-07-11 21:41:04.349580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.349607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.349716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.349742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.349877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.349921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.350025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.350052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.350208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.350234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.350356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.350382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.350509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.350537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.350644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.350670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.350789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.350829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.350951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.350991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 
00:34:29.703 [2024-07-11 21:41:04.351147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.351175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.351288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.351315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.351472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.351500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.351630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.351656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.351786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.351813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.351959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.351988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.352129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.352157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.352272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.352303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.352457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.352486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.352631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.352657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 
00:34:29.703 [2024-07-11 21:41:04.352770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.352798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.352930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.352961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.353153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.353182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.353350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.353385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.353518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.353549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.353693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.353722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.353870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.353909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.354068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.354114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.354267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.354310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.354518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.354565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 
00:34:29.703 [2024-07-11 21:41:04.354693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.354719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.354835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.354863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.354997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.355046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.355197] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.355242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.355342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.355368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.355504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.355532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.355666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.355692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.355843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.355881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.356023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.356050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.356204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.356233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 
00:34:29.703 [2024-07-11 21:41:04.356436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.703 [2024-07-11 21:41:04.356481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.703 qpair failed and we were unable to recover it. 00:34:29.703 [2024-07-11 21:41:04.356658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.356684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.356795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.356822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.356949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.356975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.357145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.357173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.357352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.357395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.357521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.357566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.357723] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.357749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.357882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.357931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.358074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.358118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 
00:34:29.704 [2024-07-11 21:41:04.358276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.358319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.358472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.358532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.358681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.358712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.358917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.358956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.359120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.359150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.359312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.359358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.359490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.359536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.359682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.359708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.359866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.359910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.360051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.360094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 
00:34:29.704 [2024-07-11 21:41:04.360239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.360268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.360446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.360493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.360628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.360654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.360800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.360831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.361012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.361039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.361167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.361192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.361349] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.361375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.361497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.361536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.361701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.361729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 00:34:29.704 [2024-07-11 21:41:04.361919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.704 [2024-07-11 21:41:04.361958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.704 qpair failed and we were unable to recover it. 
00:34:29.704 [2024-07-11 21:41:04.362156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.704 [2024-07-11 21:41:04.362186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.704 qpair failed and we were unable to recover it.
00:34:29.704 [2024-07-11 21:41:04.362769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.704 [2024-07-11 21:41:04.362796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.704 qpair failed and we were unable to recover it.
00:34:29.705 [2024-07-11 21:41:04.364717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.705 [2024-07-11 21:41:04.364762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.705 qpair failed and we were unable to recover it.
00:34:29.708 [... the same three-line sequence — connect() failed, errno = 111 (ECONNREFUSED); sock connection error; qpair failed and we were unable to recover it — repeats through 2024-07-11 21:41:04.399161, cycling among tqpair handles 0x1c1ef20, 0x7fb7a0000b90, and 0x7fb7a8000b90, all with addr=10.0.0.2, port=4420 ...]
00:34:29.708 [2024-07-11 21:41:04.399329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.399372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.399549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.399595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.399694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.399720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.399877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.399922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.400038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.400067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.400183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.400211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.400341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.400367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.400497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.400523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.400621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.400648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.400749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.400783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 
00:34:29.708 [2024-07-11 21:41:04.400920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.400947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.401076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.401102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.401208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.401234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.401337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.401365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.401465] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.401492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.401601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.401628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.401774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.401801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.401957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.401983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.402109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.402135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.402239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.402265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 
00:34:29.708 [2024-07-11 21:41:04.402396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.402422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.402551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.402578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.402683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.402710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.402877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.402908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.403070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.403097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.403203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.403229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.403371] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.403397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.403531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.403557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.403690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.403716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.403852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.403895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 
00:34:29.708 [2024-07-11 21:41:04.404026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.404053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.404180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.404209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.404369] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.404396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.404531] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.404557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.404669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.404695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.404847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.404892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.405006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.405035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.405187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.405232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.405410] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.405452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.405560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.405586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 
00:34:29.708 [2024-07-11 21:41:04.405716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.405743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.405910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.405954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.406113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.406163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.406285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.406328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.406461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.406488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.406584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.406610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.406742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.708 [2024-07-11 21:41:04.406777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.708 qpair failed and we were unable to recover it. 00:34:29.708 [2024-07-11 21:41:04.406916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.406960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.407117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.407175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.407312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.407338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 
00:34:29.709 [2024-07-11 21:41:04.407463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.407493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.407605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.407632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.407731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.407765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.407922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.407966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.408068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.408095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.408198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.408225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.408380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.408406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.408507] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.408533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.408674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.408700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.408832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.408875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 
00:34:29.709 [2024-07-11 21:41:04.409034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.409060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.409242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.409285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.409443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.409468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.709 [2024-07-11 21:41:04.409622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.709 [2024-07-11 21:41:04.409648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.709 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.409758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.409786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.409930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.409974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.410128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.410179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.410336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.410378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.410505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.410531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.410704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.410742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 
00:34:29.994 [2024-07-11 21:41:04.410887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.410918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.411038] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.411067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.411210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.411238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.411355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.411381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.411536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.411564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.411715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.411740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.411880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.411906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.412009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.412053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.412170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.412200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 00:34:29.994 [2024-07-11 21:41:04.412335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.412364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.994 qpair failed and we were unable to recover it. 
00:34:29.994 [2024-07-11 21:41:04.412532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.994 [2024-07-11 21:41:04.412560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.412682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.412708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.412811] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.412839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.412955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.412981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.413147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.413176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.413328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.413372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.413516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.413559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.413693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.413720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.413884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.413910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.414061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.414090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 
00:34:29.995 [2024-07-11 21:41:04.414282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.414332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.414470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.414495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.414599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.414626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.414731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.414770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.414892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.414922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.415107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.415134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.415287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.415317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.415497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.415525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.415672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.415700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.415859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.415885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 
00:34:29.995 [2024-07-11 21:41:04.416016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.416042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.416142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.416168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.416287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.416315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.416452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.416480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.416636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.416664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.416819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.416845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.416949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.416976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.417133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.417161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.417330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.417358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.417500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.417529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 
00:34:29.995 [2024-07-11 21:41:04.417683] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.417726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.417869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.417909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.418074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.418118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.418238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.418267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.418497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.418553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.418689] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.418716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.418851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.418896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.995 [2024-07-11 21:41:04.419051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.995 [2024-07-11 21:41:04.419080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.995 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.419250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.419292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.419534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.419585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 
00:34:29.996 [2024-07-11 21:41:04.419751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.419785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.419943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.419972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.420143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.420172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.420352] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.420397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.420532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.420558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.420665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.420691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.420834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.420877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.421034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.421066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.421234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.421292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 00:34:29.996 [2024-07-11 21:41:04.421441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:29.996 [2024-07-11 21:41:04.421489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:29.996 qpair failed and we were unable to recover it. 
00:34:29.996 [2024-07-11 21:41:04.421630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.996 [2024-07-11 21:41:04.421664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:29.996 qpair failed and we were unable to recover it.
00:34:29.996 [2024-07-11 21:41:04.422900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.996 [2024-07-11 21:41:04.422939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:29.996 qpair failed and we were unable to recover it.
00:34:29.996 [2024-07-11 21:41:04.423070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:29.996 [2024-07-11 21:41:04.423117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:29.996 qpair failed and we were unable to recover it.
00:34:30.001 [... the same connect() failed, errno = 111 / sock connection error pair repeats continuously from 21:41:04.421630 through 21:41:04.458730, cycling over tqpair=0x1c1ef20, tqpair=0x7fb7a8000b90, and tqpair=0x7fb7a0000b90, always with addr=10.0.0.2, port=4420; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:34:30.001 [2024-07-11 21:41:04.458841] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.458867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.459024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.459068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.459251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.459281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.459475] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.459519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.459651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.459677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.459807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.459839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.459955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.459984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.460101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.460130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.460298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.460326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.460447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.460475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 
00:34:30.001 [2024-07-11 21:41:04.460641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.460669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.460819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.460846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.001 [2024-07-11 21:41:04.460970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.001 [2024-07-11 21:41:04.460999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.001 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.461173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.461215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.461390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.461444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.461603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.461628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.461800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.461829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.461988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.462036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.462154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.462202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.462383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.462425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 
00:34:30.002 [2024-07-11 21:41:04.462528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.462555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.462711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.462737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.462879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.462906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.463029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.463056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.463214] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.463240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.463372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.463398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.463532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.463558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.463688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.463713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.463866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.463905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.464043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.464071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 
00:34:30.002 [2024-07-11 21:41:04.464172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.464197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.464346] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.464391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.464529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.464555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.464690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.464717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.464905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.464934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.465096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.465139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.465257] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.465287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.465437] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.465463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.465597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.465622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.465751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.465789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 
00:34:30.002 [2024-07-11 21:41:04.465912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.465956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.466106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.466149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.466327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.466370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.466483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.466510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.466642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.466668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.466783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.466811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.466973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.466999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.467132] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.467158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.467277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.467305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.467447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.467475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 
00:34:30.002 [2024-07-11 21:41:04.467604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.467648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.467816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.467845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.468026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.002 [2024-07-11 21:41:04.468070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.002 qpair failed and we were unable to recover it. 00:34:30.002 [2024-07-11 21:41:04.468250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.468295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.468424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.468467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.468626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.468651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.468813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.468844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.468983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.469011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.469153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.469186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.469351] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.469380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 
00:34:30.003 [2024-07-11 21:41:04.469519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.469548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.469698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.469738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.469886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.469914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.470075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.470104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.470244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.470272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.470441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.470496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.470621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.470649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.470815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.470854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.471030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.471069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.471227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.471276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 
00:34:30.003 [2024-07-11 21:41:04.471421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.471464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.471623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.471649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.471786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.471812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.471938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.471968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.472098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.472140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.472294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.472322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.472431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.472460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.472601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.472629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.472746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.472797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.472942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.472970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 
00:34:30.003 [2024-07-11 21:41:04.473109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.473138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.473279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.473308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.473459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.473491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.473638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.473664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.473775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.473801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.473927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.473975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.474121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.474165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.474314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.474358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.003 [2024-07-11 21:41:04.474493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.003 [2024-07-11 21:41:04.474521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.003 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.474653] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.474679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 
00:34:30.004 [2024-07-11 21:41:04.474831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.474862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.475007] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.475035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.475251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.475280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.475421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.475450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.475594] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.475623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.475796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.475835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.475999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.476026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.476170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.476219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.476390] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.476419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.476590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.476615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 
00:34:30.004 [2024-07-11 21:41:04.476716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.476743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.476907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.476933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.477083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.477112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.477240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.477282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.477466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.477493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.477662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.477691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.477817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.477843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.477975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.478000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.478148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.478176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.478319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.478347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 
00:34:30.004 [2024-07-11 21:41:04.478491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.478521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.478674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.478700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.478869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.478909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.479041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.479072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.479271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.479316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.479491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.479547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.479671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.479697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.479852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.479887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.480039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.480081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.480246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.480305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 
00:34:30.004 [2024-07-11 21:41:04.480454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.480498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.480632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.480659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.480820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.480852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.480966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.480996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.481165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.481193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.481307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.481342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.481482] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.481509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.481666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.481692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.481805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.481833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.004 qpair failed and we were unable to recover it. 00:34:30.004 [2024-07-11 21:41:04.481982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.004 [2024-07-11 21:41:04.482010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.005 qpair failed and we were unable to recover it. 
00:34:30.005 [2024-07-11 21:41:04.482135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.005 [2024-07-11 21:41:04.482161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.005 qpair failed and we were unable to recover it.
00:34:30.005 [2024-07-11 21:41:04.482687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.005 [2024-07-11 21:41:04.482730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.005 qpair failed and we were unable to recover it.
00:34:30.005 [2024-07-11 21:41:04.482917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.005 [2024-07-11 21:41:04.482964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.005 qpair failed and we were unable to recover it.
00:34:30.010 [same connect() failed (errno = 111) / "qpair failed and we were unable to recover it." pair repeated continuously for tqpairs 0x7fb7a8000b90, 0x7fb7a0000b90 and 0x1c1ef20, all targeting addr=10.0.0.2, port=4420, from 21:41:04.482 through 21:41:04.518]
00:34:30.010 [2024-07-11 21:41:04.518301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.518344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.518451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.518476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.518583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.518608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.518765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.518791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.518944] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.518988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.519136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.519179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.519332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.519357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.519460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.519486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.519642] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.519667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.519785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.519844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 
00:34:30.010 [2024-07-11 21:41:04.519980] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.520011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.520135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.520164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.520310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.520339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.520504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.520533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.520666] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.520694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.520843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.520872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.521043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.521071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.521189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.521218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.521359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.521405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.521508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.521534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 
00:34:30.010 [2024-07-11 21:41:04.521667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.521692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.010 [2024-07-11 21:41:04.521819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.010 [2024-07-11 21:41:04.521863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.010 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.521984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.522012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.522137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.522162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.522262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.522288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.522440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.522465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.522600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.522630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.522733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.522768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.522911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.522937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.523084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.523128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 
00:34:30.011 [2024-07-11 21:41:04.523239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.523264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.523370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.523396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.523525] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.523550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.523680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.523706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.523890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.523934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.524073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.524115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.524237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.524281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.524435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.524460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.524602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.524627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.524770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.524828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 
00:34:30.011 [2024-07-11 21:41:04.524988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.525018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.525141] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.525170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.525322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.525352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.525520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.525549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.525675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.525700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.525856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.525903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.526060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.526103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.526221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.526264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.526420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.526448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.526566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.526592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 
00:34:30.011 [2024-07-11 21:41:04.526748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.526778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.526900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.526949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.527065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.527094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.527244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.527269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.527398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.527425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.527553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.527579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.527704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.527730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.527910] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.011 [2024-07-11 21:41:04.527953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.011 qpair failed and we were unable to recover it. 00:34:30.011 [2024-07-11 21:41:04.528076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.528106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.528260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.528286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 
00:34:30.012 [2024-07-11 21:41:04.528430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.528458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.528600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.528629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.528759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.528786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.528916] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.528944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.529043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.529069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.529175] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.529200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.529338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.529364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.529476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.529502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.529614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.529640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.529776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.529802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 
00:34:30.012 [2024-07-11 21:41:04.529933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.529959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.530066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.530091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.530229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.530255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.530357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.530382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.530487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.530513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.530656] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.530681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.530792] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.530819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.530919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.530945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.531085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.531113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.531225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.531253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 
00:34:30.012 [2024-07-11 21:41:04.531397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.531427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.531566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.531596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.531735] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.531770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.531898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.531923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.532050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.532082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.532254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.532297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.532425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.532468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.532597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.532622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.532727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.532759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.532861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.532886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 
00:34:30.012 [2024-07-11 21:41:04.532992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.533018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.533124] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.533149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.012 qpair failed and we were unable to recover it. 00:34:30.012 [2024-07-11 21:41:04.533278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.012 [2024-07-11 21:41:04.533305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.533411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.533446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.533604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.533630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.533744] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.533776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.533880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.533906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.534036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.534062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.534163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.534189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.534291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.534318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 
00:34:30.013 [2024-07-11 21:41:04.534416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.534441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.534545] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.534571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.534696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.534721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.534829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.534856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.534987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.535014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.535144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.535170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.535300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.535326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.535434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.535460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.535585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.535623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.535770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.535818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 
00:34:30.013 [2024-07-11 21:41:04.535940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.535969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.536105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.536133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.536254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.536284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.536396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.536425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.536552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.536578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.536688] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.536714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.536845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.536889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.537014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.537057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.537236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.537283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.537404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.537432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 
00:34:30.013 [2024-07-11 21:41:04.537575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.537608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.537712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.537738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.537879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.537908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.538034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.538062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.538180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.538210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.538353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.538381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.538519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.538549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.538671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.538696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.538854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.538898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.539049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.539094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 
00:34:30.013 [2024-07-11 21:41:04.539241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.539285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.539392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.539418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.013 qpair failed and we were unable to recover it. 00:34:30.013 [2024-07-11 21:41:04.539523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.013 [2024-07-11 21:41:04.539550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.014 qpair failed and we were unable to recover it. 00:34:30.014 [2024-07-11 21:41:04.539681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.014 [2024-07-11 21:41:04.539708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.014 qpair failed and we were unable to recover it. 00:34:30.014 [2024-07-11 21:41:04.539835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.014 [2024-07-11 21:41:04.539864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.014 qpair failed and we were unable to recover it. 00:34:30.014 [2024-07-11 21:41:04.540020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.014 [2024-07-11 21:41:04.540049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.014 qpair failed and we were unable to recover it. 00:34:30.014 [2024-07-11 21:41:04.540203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.014 [2024-07-11 21:41:04.540232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.014 qpair failed and we were unable to recover it. 00:34:30.014 [2024-07-11 21:41:04.540342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.014 [2024-07-11 21:41:04.540372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.014 qpair failed and we were unable to recover it. 00:34:30.014 [2024-07-11 21:41:04.540519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.014 [2024-07-11 21:41:04.540548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.014 qpair failed and we were unable to recover it. 00:34:30.014 [2024-07-11 21:41:04.540687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.014 [2024-07-11 21:41:04.540717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.014 qpair failed and we were unable to recover it. 
00:34:30.019 [2024-07-11 21:41:04.571434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.571477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.571630] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.571655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.571759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.571786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.571901] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.571929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.572077] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.572106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.572250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.572283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.572405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.572434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.572607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.572635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.572767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.572811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.572958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.572986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 
00:34:30.019 [2024-07-11 21:41:04.573128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.573156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.573295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.573323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.573468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.573499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.573644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.573669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.573793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.573820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.573978] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.574022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.574177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.574206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.574335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.574377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.574548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.574575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.574704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.574730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 
00:34:30.019 [2024-07-11 21:41:04.574843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.574869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.574985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.575014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.019 qpair failed and we were unable to recover it. 00:34:30.019 [2024-07-11 21:41:04.575137] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.019 [2024-07-11 21:41:04.575166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.575284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.575313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.575453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.575481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.575637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.575662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.575770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.575797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.575906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.575931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.576028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.576054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.576171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.576199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 
00:34:30.020 [2024-07-11 21:41:04.576341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.576369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.576512] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.576542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.576655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.576688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.576826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.576854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.577010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.577055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.577187] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.577231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.577375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.577417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.577548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.577574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.577678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.577703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.577858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.577902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 
00:34:30.020 [2024-07-11 21:41:04.578068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.578112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.578248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.578276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.578411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.578438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.578606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.578632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.578740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.578775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.578917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.578943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.579068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.579097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.579238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.579266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.579408] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.579437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.579552] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.579581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 
00:34:30.020 [2024-07-11 21:41:04.579699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.579728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.579858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.579886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.580018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.580061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.580186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.580231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.580357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.580385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.580540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.580565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.580673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.580698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.580852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.580879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.581006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.581034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.581192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.581223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 
00:34:30.020 [2024-07-11 21:41:04.581355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.581385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.581530] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.581560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.020 [2024-07-11 21:41:04.581731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.020 [2024-07-11 21:41:04.581767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.020 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.581911] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.581939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.582048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.582077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.582210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.582239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.582359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.582388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.582528] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.582556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.582696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.582724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.582869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.582897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 
00:34:30.021 [2024-07-11 21:41:04.583044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.583088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.583235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.583279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.583393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.583437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.583576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.583602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.583702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.583728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.583880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.583910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.584029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.584058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.584165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.584193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.584330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.584358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.584502] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.584531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 
00:34:30.021 [2024-07-11 21:41:04.584639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.584667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.584834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.584861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.585008] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.585053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.585232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.585260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.585403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.585430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.585578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.585605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.585770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.585797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.585954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.585982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.586172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.586215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.586383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.586409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 
00:34:30.021 [2024-07-11 21:41:04.586544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.586571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.586677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.586703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.586835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.586864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.586976] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.587005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.587121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.587150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.587293] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.587322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.587484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.587537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.587695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.021 [2024-07-11 21:41:04.587721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.021 qpair failed and we were unable to recover it. 00:34:30.021 [2024-07-11 21:41:04.587857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.587883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.588036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.588080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 
00:34:30.022 [2024-07-11 21:41:04.588240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.588284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.588426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.588469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.588601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.588628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.588733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.588766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.588865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.588891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.589024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.589050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.589163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.589188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.589315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.589340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.589494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.589520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.589648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.589674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 
00:34:30.022 [2024-07-11 21:41:04.589808] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.589834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.589948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.589977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.590143] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.590172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.590301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.590330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.590505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.590533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.590673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.590701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.590852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.590878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.591035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.591079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.591206] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.591251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.591373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.591403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 
00:34:30.022 [2024-07-11 21:41:04.591550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.591575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.591705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.591730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.591861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.591906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.592054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.592082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.592277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.592320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.592466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.592509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.592667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.592697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.592832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.592869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.593019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.593045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.593174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.593200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 
00:34:30.022 [2024-07-11 21:41:04.593328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.593354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.593484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.593510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.593605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.593631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.593732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.593763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.593871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.593897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.594022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.594051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.594172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.594198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.594358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.022 [2024-07-11 21:41:04.594387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.022 qpair failed and we were unable to recover it. 00:34:30.022 [2024-07-11 21:41:04.594555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.023 [2024-07-11 21:41:04.594584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.023 qpair failed and we were unable to recover it. 00:34:30.023 [2024-07-11 21:41:04.594739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.023 [2024-07-11 21:41:04.594770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.023 qpair failed and we were unable to recover it. 
00:34:30.023 [2024-07-11 21:41:04.594900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.023 [2024-07-11 21:41:04.594943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:30.023 qpair failed and we were unable to recover it.
[... the three-line error above repeats for every subsequent connection attempt in this span, roughly 200 times between 21:41:04.594900 and 21:41:04.630884 (log clock 00:34:30.023 through 00:34:30.028). Each repetition is identical except for the microsecond timestamp and the tqpair value, which alternates between 0x1c1ef20 and 0x7fb7a0000b90; every attempt targets addr=10.0.0.2, port=4420 and fails with errno = 111, and no qpair recovers ...]
00:34:30.028 [2024-07-11 21:41:04.631040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.631082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.631268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.631314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.631472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.631498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.631631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.631656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.631763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.631790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.631946] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.631990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.632110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.632152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.632282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.632309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.632446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.632472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.632628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.632653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 
00:34:30.028 [2024-07-11 21:41:04.632772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.632830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.632984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.633014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.633168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.633197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.633341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.633370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.633540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.633568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.633725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.633757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.633899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.633928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.634070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.634099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.634211] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.634245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.634363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.634391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 
00:34:30.028 [2024-07-11 21:41:04.634534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.634563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.634707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.634736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.634884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.634915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.635063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.635092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.635233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.635262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.635432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.635464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.028 [2024-07-11 21:41:04.635638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.028 [2024-07-11 21:41:04.635664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.028 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.635844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.635888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.636001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.636029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.636219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.636248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 
00:34:30.029 [2024-07-11 21:41:04.636422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.636467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.636603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.636630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.636770] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.636797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.636898] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.636924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.637052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.637080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.637251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.637280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.637416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.637445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.637554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.637583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.637702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.637732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.637873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.637898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 
00:34:30.029 [2024-07-11 21:41:04.638023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.638053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.638192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.638221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.638333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.638361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.638497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.638525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.638636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.638677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.638835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.638865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.638999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.639025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.639154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.639197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.639341] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.639369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.639543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.639574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 
00:34:30.029 [2024-07-11 21:41:04.639704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.639730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.639861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.639887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.640018] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.640044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.640179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.640223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.640400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.640426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.640570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.640595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.640707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.640732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.640868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.640894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.641040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.641068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.641218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.641249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 
00:34:30.029 [2024-07-11 21:41:04.641365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.641394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.641566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.641595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.641758] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.029 [2024-07-11 21:41:04.641787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.029 qpair failed and we were unable to recover it. 00:34:30.029 [2024-07-11 21:41:04.641903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.641929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.642052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.642080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.642199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.642228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.642374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.642403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.642543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.642571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.642746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.642779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.642927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.642952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 
00:34:30.030 [2024-07-11 21:41:04.643062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.643088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.643221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.643247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.643343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.643372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.643496] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.643555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.643699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.643728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.643879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.643923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.644048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.644091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.644215] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.644241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.644363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.644406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.644563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.644589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 
00:34:30.030 [2024-07-11 21:41:04.644718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.644744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.644871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.644897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.645028] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.645054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.645158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.645184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.645336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.645377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.645510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.645537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.645678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.645705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.645863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.645908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.646020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.646063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.646237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.646280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 
00:34:30.030 [2024-07-11 21:41:04.646388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.646414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.646570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.646595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.646700] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.646726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.646842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.646867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.647020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.647046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.647177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.647203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.647333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.647358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.647543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.647569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.647702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.647729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.647861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.647891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 
00:34:30.030 [2024-07-11 21:41:04.648079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.648118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.648275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.648318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.030 [2024-07-11 21:41:04.648453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.030 [2024-07-11 21:41:04.648479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.030 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.648591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.648630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.648774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.648820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.648937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.648966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.649113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.649141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.649273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.649314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.649456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.649486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.649639] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.649667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 
00:34:30.031 [2024-07-11 21:41:04.649816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.649846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.649988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.650032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.650148] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.650196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.650302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.650328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.650459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.650484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.650593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.650618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.650749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.650780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.650893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.650938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.651087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.651132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.651287] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.651313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 
00:34:30.031 [2024-07-11 21:41:04.651415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.651442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.651574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.651602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.651765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.651809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.651918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.651946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.652093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.652123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.652278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.652306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.652424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.652454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.652609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.652635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.652743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.652785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 00:34:30.031 [2024-07-11 21:41:04.652941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.652967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it. 
00:34:30.031 [2024-07-11 21:41:04.653114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.031 [2024-07-11 21:41:04.653143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.031 qpair failed and we were unable to recover it.
[... this three-line error sequence repeats essentially verbatim roughly 200 more times between 21:41:04.653 and 21:41:04.689 (log timestamps 00:34:30.031 through 00:34:30.036). Only the microsecond timestamps vary; the failing queue pair alternates among tqpair=0x1c1ef20, tqpair=0x7fb7a0000b90, and tqpair=0x7fb7a8000b90, always with addr=10.0.0.2, port=4420, and each repetition ends with "qpair failed and we were unable to recover it." ...]
00:34:30.036 [2024-07-11 21:41:04.688974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.036 [2024-07-11 21:41:04.689000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.036 qpair failed and we were unable to recover it. 00:34:30.036 [2024-07-11 21:41:04.689158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.036 [2024-07-11 21:41:04.689187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.036 qpair failed and we were unable to recover it. 00:34:30.036 [2024-07-11 21:41:04.689298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.036 [2024-07-11 21:41:04.689326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.036 qpair failed and we were unable to recover it. 00:34:30.036 [2024-07-11 21:41:04.689450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.036 [2024-07-11 21:41:04.689476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.036 qpair failed and we were unable to recover it. 00:34:30.036 [2024-07-11 21:41:04.689636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.036 [2024-07-11 21:41:04.689664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.036 qpair failed and we were unable to recover it. 00:34:30.036 [2024-07-11 21:41:04.689781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.036 [2024-07-11 21:41:04.689826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.036 qpair failed and we were unable to recover it. 00:34:30.036 [2024-07-11 21:41:04.689926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.036 [2024-07-11 21:41:04.689953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.036 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.690134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.690163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.690274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.690302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.690420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.690448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 
00:34:30.037 [2024-07-11 21:41:04.690584] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.690623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.690742] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.690775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.690888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.690915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.691062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.691108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.691250] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.691299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.691445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.691488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.691619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.691646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.691750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.691780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.691888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.691914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.692037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.692065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 
00:34:30.037 [2024-07-11 21:41:04.692209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.692237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.692376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.692404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.692555] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.692580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.692712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.692737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.692860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.692897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.693052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.693086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.693249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.693279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.693421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.693449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.693570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.693598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.693719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.693745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 
00:34:30.037 [2024-07-11 21:41:04.693888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.693914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.694041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.694069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.694180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.694208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.694336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.694366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.694539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.694586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.694698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.694725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.694838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.694867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.694967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.695009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.695133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.695159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.695292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.695321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 
00:34:30.037 [2024-07-11 21:41:04.695464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.695492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.695636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.695665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.695820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.695846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.695957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.695983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.696106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.696135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.696262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.696304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.696457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.696485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.696604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.696632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.696778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.696804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.037 [2024-07-11 21:41:04.696955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.696984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 
00:34:30.037 [2024-07-11 21:41:04.697101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.037 [2024-07-11 21:41:04.697130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.037 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.697272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.697300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.697456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.697485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.697600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.697628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.697776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.697820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.697955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.697981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.698093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.698121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.698261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.698289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.698431] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.698460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.698578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.698606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 
00:34:30.038 [2024-07-11 21:41:04.698762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.698788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.698900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.698926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.699075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.699103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.699229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.699272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.699417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.699446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.699549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.699578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.699721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.699746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.699856] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.699881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.700040] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.700072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.700194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.700220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 
00:34:30.038 [2024-07-11 21:41:04.700355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.700397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.700541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.700570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.700697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.700722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.700835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.700861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.700987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.701012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.701170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.701199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.701308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.701336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.701476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.701504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.701652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.701691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.701814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.701853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 
00:34:30.038 [2024-07-11 21:41:04.701968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.701995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.038 [2024-07-11 21:41:04.702168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.038 [2024-07-11 21:41:04.702209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.038 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.702327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.702358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.702523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.702552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.702670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.702701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.702854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.702893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.703027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.703073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.703203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.703230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.703385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.703433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.703560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.703587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 
00:34:30.039 [2024-07-11 21:41:04.703699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.703725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.703859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.703886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.703989] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.704014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.704135] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.704179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.704280] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.704306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.704429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.704455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.704557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.704582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.704709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.704734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.704875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.704901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.705062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.705104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 
00:34:30.039 [2024-07-11 21:41:04.705284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.705330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.705463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.705488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.705601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.705626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.705791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.705835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.705983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.706013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.706160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.706191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.706333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.706361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.706503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.706532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.706690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.706716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.706831] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.706859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 
00:34:30.039 [2024-07-11 21:41:04.707014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.707058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.707192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.707235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.707355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.707399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.707536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.707562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.707662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.707689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.707810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.707849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.707973] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.708013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.039 [2024-07-11 21:41:04.708123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.039 [2024-07-11 21:41:04.708152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.039 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.708324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.708354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.708518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.708547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 
00:34:30.040 [2024-07-11 21:41:04.708694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.708723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.708887] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.708914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.709043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.709070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.709185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.709214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.709363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.709392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.709574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.709631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.709791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.709819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.709932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.709960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.710094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.710122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.710224] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.710250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 
00:34:30.040 [2024-07-11 21:41:04.710404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.710453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.710591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.710617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.710721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.710746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.710906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.710951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.711103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.711148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.711274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.711323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.711491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.711521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.711691] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.711720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.711853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.711897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.712043] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.712072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 
00:34:30.040 [2024-07-11 21:41:04.712181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.712212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.712357] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.712386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.712539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.712567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.712696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.712721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.712882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.712926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.713085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.713114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.713282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.713325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.713454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.713479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.713589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.713615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.713729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.713762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 
00:34:30.040 [2024-07-11 21:41:04.713875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.713901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.714061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.714091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.714207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.714236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.714377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.714405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.714593] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.714638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.714775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.714804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.714926] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.714969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.715095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.040 [2024-07-11 21:41:04.715123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.040 qpair failed and we were unable to recover it. 00:34:30.040 [2024-07-11 21:41:04.715298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.715326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.715521] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.715565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 
00:34:30.041 [2024-07-11 21:41:04.715668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.715695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.715828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.715855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.715963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.715990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.716113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.716142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.716283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.716312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.716455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.716484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.716629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.716656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.716760] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.716786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.716914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.716942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.717109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.717153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 
00:34:30.041 [2024-07-11 21:41:04.717282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.717326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.717470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.717514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.717646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.717672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.717816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.717859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.717982] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.718011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.718150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.718184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.718301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.718330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.718464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.718493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.718596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.718624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.718740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.718774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 
00:34:30.041 [2024-07-11 21:41:04.718903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.718928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.719045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.719073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.719190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.719217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.719393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.719421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.719590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.719615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.719715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.719743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.719867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.719893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.720042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.720071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.720213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.720242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.720389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.720418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 
00:34:30.041 [2024-07-11 21:41:04.720560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.720588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.720781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.720821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.720932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.720962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.721130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.041 [2024-07-11 21:41:04.721174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.041 qpair failed and we were unable to recover it. 00:34:30.041 [2024-07-11 21:41:04.721288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.721318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.721426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.721456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.721600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.721629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.721779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.721836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.721974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.722001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.722157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.722186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 
00:34:30.042 [2024-07-11 21:41:04.722319] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.722344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.722504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.722533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.722681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.722711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.722860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.722887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.723015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.723041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.723198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.723228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.723368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.723397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.723539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.723567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.723763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.723802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.723941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.723980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 
00:34:30.042 [2024-07-11 21:41:04.724097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.724125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.724232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.724260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.724385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.724416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.724527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.724570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.724705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.724734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.724865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.724891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.725058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.725084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.725209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.725238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.725375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.725418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.725550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.725575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 
00:34:30.042 [2024-07-11 21:41:04.725710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.725737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.725857] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.725883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.726016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.726043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.726208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.726234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.726368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.726394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.726564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.726593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.726766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.726796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.726924] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.726950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.042 [2024-07-11 21:41:04.727058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.042 [2024-07-11 21:41:04.727085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.042 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.727243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.727269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 
00:34:30.043 [2024-07-11 21:41:04.727433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.727491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.727604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.727633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.727745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.727776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.727930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.727975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.728123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.728152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.728294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.728337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.728441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.728468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.728598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.728624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.728750] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.728782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.728905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.728949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 
00:34:30.043 [2024-07-11 21:41:04.729067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.729110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.729259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.729302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.729436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.729462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.729565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.729590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.729698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.729723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.729874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.729906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.730050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.730079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.730242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.730271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.730402] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.730431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.730601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.730630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 
00:34:30.043 [2024-07-11 21:41:04.730767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.730794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.730927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.730956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.731100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.731129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.731266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.731296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.731430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.731462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.731634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.731663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.731787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.731832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.731967] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.731996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.732140] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.732169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.732301] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.732330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 
00:34:30.043 [2024-07-11 21:41:04.732440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.732470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.732610] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.732640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.732776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.732832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.732947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.732975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.733087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.733114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.733268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.733296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.733453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.733481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.043 qpair failed and we were unable to recover it. 00:34:30.043 [2024-07-11 21:41:04.733617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.043 [2024-07-11 21:41:04.733643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.733761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.733789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.733929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.733960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 
00:34:30.044 [2024-07-11 21:41:04.734114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.734142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.734251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.734281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.734395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.734425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.734571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.734600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.734715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.734744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.734870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.734896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.735025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.735051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.735177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.735203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.735342] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.735371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.735518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.735547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 
00:34:30.044 [2024-07-11 21:41:04.735724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.735769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.735930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.735960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.736082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.736111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.736226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.736256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.736426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.736454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.736613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.736680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.736814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.736841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.044 [2024-07-11 21:41:04.736949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.044 [2024-07-11 21:41:04.736975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.044 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.737120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.737163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.737279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.737308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 
00:34:30.330 [2024-07-11 21:41:04.737420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.737449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.737596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.737625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.737759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.737786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.737889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.737931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.738089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.738117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.738258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.738287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.738486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.738520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.738661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.738691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.738812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.738857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 00:34:30.330 [2024-07-11 21:41:04.738964] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.738990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it. 
00:34:30.330 [2024-07-11 21:41:04.739119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.330 [2024-07-11 21:41:04.739162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.330 qpair failed and we were unable to recover it.
00:34:30.335 [... the same three-line error pattern repeats without interruption through 2024-07-11 21:41:04.773296, alternating between tqpair=0x1c1ef20 and tqpair=0x7fb798000b90: every connect() attempt to 10.0.0.2, port 4420 fails with errno = 111, and each qpair fails without recovering ...]
00:34:30.335 [2024-07-11 21:41:04.773433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.773458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.773578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.773607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.773748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.773803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.773933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.773959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.774090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.774116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.774227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.774254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.774383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.774410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.774541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.774567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.774692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.774718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.774852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.774880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 
00:34:30.335 [2024-07-11 21:41:04.775012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.775038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.775145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.775171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.775302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.775330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.775460] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.775486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.775619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.775645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.775781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.775808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.775971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.775997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.776156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.776182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.776343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.776369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.776500] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.776526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 
00:34:30.335 [2024-07-11 21:41:04.776677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.776716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.776845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.776874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.777029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.777073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.777312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.777341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.777476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.777505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.777658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.777684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.777864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.777895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.778017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.778046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.778202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.778232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 00:34:30.335 [2024-07-11 21:41:04.778397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.335 [2024-07-11 21:41:04.778437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.335 qpair failed and we were unable to recover it. 
00:34:30.339 [2024-07-11 21:41:04.804282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.804311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.804477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.804519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.804629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.804657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.804817] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.804844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.804954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.804982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.805142] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.805172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.805285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.805315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.805439] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.805469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.805650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.805677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.805823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.805862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 
00:34:30.339 [2024-07-11 21:41:04.805975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.806002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.806158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.806187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.806391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.806443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.806592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.806622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.806778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.806806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.806907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.806935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.807042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.807069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.807186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.807212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.807404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.807455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.807598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.807627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 
00:34:30.339 [2024-07-11 21:41:04.807814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.807841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.807960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.808000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.808131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.808177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.808299] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.808347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.808503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.808531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.808663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.808689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.808813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.808839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.808974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.809001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.809133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.809160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.809290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.809317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 
00:34:30.339 [2024-07-11 21:41:04.809447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.809472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.809602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.809628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.809800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.809830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.809965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.810000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.810153] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.810179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.810284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.810311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.810446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.810472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.810606] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.810632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.810739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.810771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.810930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.810975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 
00:34:30.339 [2024-07-11 21:41:04.811115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.811161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.339 qpair failed and we were unable to recover it. 00:34:30.339 [2024-07-11 21:41:04.811261] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.339 [2024-07-11 21:41:04.811287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.811430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.811456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.811588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.811614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.811722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.811750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.811935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.811980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.812129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.812173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.812344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.812370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.812504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.812530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.812664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.812690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 
00:34:30.340 [2024-07-11 21:41:04.812846] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.812889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.813045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.813087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.813230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.813273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.813407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.813432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.813537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.813563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.813718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.813745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.813932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.813979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.814101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.814126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.814264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.814290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.814420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.814445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 
00:34:30.340 [2024-07-11 21:41:04.814565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.814605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.814745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.814779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.814883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.814910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.815052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.815078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.815238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.815269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.815432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.815483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.815634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.815660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.815791] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.815819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.815969] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.815998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.816111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.816141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 
00:34:30.340 [2024-07-11 21:41:04.816313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.816342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.816513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.816542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.816701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.816729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.816866] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.816920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.817048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.817092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.817242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.817286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.817456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.817483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.817611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.817636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.817773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.817820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.817991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.818020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 
00:34:30.340 [2024-07-11 21:41:04.818130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.818159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.818295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.818321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.818451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.818477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.818607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.818634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.340 qpair failed and we were unable to recover it. 00:34:30.340 [2024-07-11 21:41:04.818795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.340 [2024-07-11 21:41:04.818824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.818984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.819010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.819130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.819173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.819294] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.819337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.819494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.819520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.819652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.819678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 
00:34:30.341 [2024-07-11 21:41:04.819819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.819847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.819988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.820015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.820120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.820147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.820340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.820392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.820534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.820563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.820724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.820774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.820921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.820951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.821099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.821142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.821270] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.821296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.821403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.821429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 
00:34:30.341 [2024-07-11 21:41:04.821590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.821615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.821725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.821751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.821936] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.821982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.822165] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.822210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.822353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.822396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.822498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.822523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.822625] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.822651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.822815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.822859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.823002] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.823045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.823198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.823228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 
00:34:30.341 [2024-07-11 21:41:04.823385] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.823411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.823538] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.823564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.823694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.823720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.823849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.823886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.824026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.824068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.824222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.824251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.824397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.824426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.824563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.824591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.824734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.824769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.824965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.824994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 
00:34:30.341 [2024-07-11 21:41:04.825139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.825167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.341 [2024-07-11 21:41:04.825290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.341 [2024-07-11 21:41:04.825332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.341 qpair failed and we were unable to recover it. 00:34:30.342 [2024-07-11 21:41:04.825520] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.342 [2024-07-11 21:41:04.825561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.342 qpair failed and we were unable to recover it. 00:34:30.342 [2024-07-11 21:41:04.825705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.342 [2024-07-11 21:41:04.825731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.342 qpair failed and we were unable to recover it. 00:34:30.342 [2024-07-11 21:41:04.825888] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.342 [2024-07-11 21:41:04.825916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.342 qpair failed and we were unable to recover it. 00:34:30.342 [2024-07-11 21:41:04.826055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.342 [2024-07-11 21:41:04.826084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.342 qpair failed and we were unable to recover it. 00:34:30.342 [2024-07-11 21:41:04.826252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.342 [2024-07-11 21:41:04.826280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.342 qpair failed and we were unable to recover it. 00:34:30.342 [2024-07-11 21:41:04.826425] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.342 [2024-07-11 21:41:04.826454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.342 qpair failed and we were unable to recover it. 00:34:30.342 [2024-07-11 21:41:04.826595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.342 [2024-07-11 21:41:04.826623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.342 qpair failed and we were unable to recover it. 00:34:30.342 [2024-07-11 21:41:04.826778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.342 [2024-07-11 21:41:04.826805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.342 qpair failed and we were unable to recover it. 
00:34:30.342 [2024-07-11 21:41:04.826935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.342 [2024-07-11 21:41:04.826960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:30.342 qpair failed and we were unable to recover it.
00:34:30.346 [... the same connect() failed / sock connection error / "qpair failed and we were unable to recover it." triplet repeats back-to-back from 21:41:04.826935 through 21:41:04.862976 (~210 occurrences in this excerpt), cycling through tqpair values 0x1c1ef20, 0x7fb7a0000b90, 0x7fb798000b90, and 0x7fb7a8000b90, always with errno = 111 against addr=10.0.0.2, port=4420 ...]
00:34:30.347 [2024-07-11 21:41:04.863092] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.863121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.863272] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.863302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.863423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.863449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.863620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.863646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.863785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.863812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.863917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.863959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.864103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.864132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.864249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.864278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.864396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.864426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.864537] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.864566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 
00:34:30.347 [2024-07-11 21:41:04.864718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.864744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.864886] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.864912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.865049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.865074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.865227] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.865253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.865419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.865445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.865592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.865620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.865780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.865807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.865960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.865986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.866163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.866192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.866359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.866387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 
00:34:30.347 [2024-07-11 21:41:04.866551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.866579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.866720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.866746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.866912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.866937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.867056] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.867086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.867219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.867261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.867404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.867432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.867574] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.867602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.867748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.867785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.867892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.867918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.868075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.868103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 
00:34:30.347 [2024-07-11 21:41:04.868251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.868279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.868397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.868427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.868572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.868601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.868746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.868780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.868899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.868937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.869072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.869099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.869278] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.869307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.869421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.869451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.869623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.347 [2024-07-11 21:41:04.869652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.347 qpair failed and we were unable to recover it. 00:34:30.347 [2024-07-11 21:41:04.869777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.869804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 
00:34:30.348 [2024-07-11 21:41:04.869942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.869970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.870127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.870157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.870350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.870378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.870519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.870547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.870696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.870724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.870873] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.870900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.871003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.871045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.871186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.871214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.871353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.871381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.871488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.871517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 
00:34:30.348 [2024-07-11 21:41:04.871632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.871660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.871812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.871838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.871959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.871988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.872173] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.872202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.872356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.872385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.872532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.872562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.872710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.872760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.872893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.872921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.873058] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.873086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.873242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.873268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 
00:34:30.348 [2024-07-11 21:41:04.873429] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.873485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.873617] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.873648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.873805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.873832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.873988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.874014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.874162] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.874206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.874320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.874348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.874472] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.874513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.874634] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.874663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.874825] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.874851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.874981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.875008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 
00:34:30.348 [2024-07-11 21:41:04.875160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.875188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.875307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.875335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.875452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.875481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.875713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.875778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.875943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.875981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.876129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.876168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.876284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.876312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.876456] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.876482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.876650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.876676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.876784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.876810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 
00:34:30.348 [2024-07-11 21:41:04.876943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.876968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.877097] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.877139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.877255] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.877283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.877450] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.877478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.877613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.877643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.877798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.877824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.877952] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.348 [2024-07-11 21:41:04.877977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.348 qpair failed and we were unable to recover it. 00:34:30.348 [2024-07-11 21:41:04.878147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.878176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.878292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.878322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.878446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.878489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 
00:34:30.349 [2024-07-11 21:41:04.878659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.878687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.878845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.878871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.878999] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.879024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.879156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.879200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.879315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.879347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.879514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.879543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.879660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.879689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.879840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.879880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.880014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.880042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.880179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.880206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 
00:34:30.349 [2024-07-11 21:41:04.880336] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.880364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.880508] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.880537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.880679] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.880709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.880864] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.880890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.881034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.881073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.881275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.881321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.881473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.881523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.881626] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.881652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.881771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.881802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.881905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.881932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 
00:34:30.349 [2024-07-11 21:41:04.882060] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.882086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.882186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.882212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.882340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.882366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.882499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.882526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.882669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.882697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.882844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.882875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.883037] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.883079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.883198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.883242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.883405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.883453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.883588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.883615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 
00:34:30.349 [2024-07-11 21:41:04.883775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.883819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.883966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.883995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.884117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.884146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.884268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.349 [2024-07-11 21:41:04.884311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.349 qpair failed and we were unable to recover it. 00:34:30.349 [2024-07-11 21:41:04.884481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.884509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.884631] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.884657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.884787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.884813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.884941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.884967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.885122] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.885150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.885292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.885321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 
00:34:30.350 [2024-07-11 21:41:04.885491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.885519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.885660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.885686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.885820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.885846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.885962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.885991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.886133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.886161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.886343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.886395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.886513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.886542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.886659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.886687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.886845] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.886871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.886990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.887034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 
00:34:30.350 [2024-07-11 21:41:04.887188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.887232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.887434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.887488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.887645] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.887671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.887799] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.887828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.887972] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.888001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.888154] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.888180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.888404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.888454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.888571] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.888596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.888746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.888795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.888961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.888988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 
00:34:30.350 [2024-07-11 21:41:04.889111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.889138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.889247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.889274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.889378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.889406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.889511] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.889537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.889697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.889723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.889884] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.889929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.890082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.890124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.890328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.890378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.890505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.890531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.890662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.890689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 
00:34:30.350 [2024-07-11 21:41:04.890813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.890842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.890985] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.891036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.891185] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.891229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.891397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.891444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.891547] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.891573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.891685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.891723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.891900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.891931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.892070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.892099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.892269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.892297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 00:34:30.350 [2024-07-11 21:41:04.892509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.350 [2024-07-11 21:41:04.892538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.350 qpair failed and we were unable to recover it. 
00:34:30.350 [2024-07-11 21:41:04.892677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.892707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.892880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.892926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.893049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.893094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.893209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.893256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.893389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.893417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.893549] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.893575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.893710] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.893736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.893893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.893922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.894061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.894089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.894196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.894224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 
00:34:30.351 [2024-07-11 21:41:04.894362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.894392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.894514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.894542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.894687] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.894713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.894879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.894906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.895095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.895138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.895308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.895357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.895515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.895542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.895682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.895708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.895895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.895934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.896098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.896128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 
00:34:30.351 [2024-07-11 21:41:04.896273] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.896301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.896436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.896464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.896643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.896671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.896830] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.896856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.896977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.897005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.897144] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.897172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.897325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.897353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.897553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.897599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.897734] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.897772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.897921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.897970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 
00:34:30.351 [2024-07-11 21:41:04.898176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.898217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.898503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.898559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.898702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.898731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.898890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.898915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.899014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.899040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.899174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.899200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.899383] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.899412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.899560] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.899589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.899722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.899757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.899906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.899932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 
00:34:30.351 [2024-07-11 21:41:04.900088] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.900116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.900312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.900373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.900605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.900654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.900802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.900828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.900961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.900987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.901098] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.901124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.351 [2024-07-11 21:41:04.901317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.351 [2024-07-11 21:41:04.901371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.351 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.901514] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.901543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.901661] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.901690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.901858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.901897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 
00:34:30.352 [2024-07-11 21:41:04.902033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.902060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.902219] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.902265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.902448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.902492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.902624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.902651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.902810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.902840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.902984] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.903026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.903131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.903157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.903304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.903355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.903492] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.903523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.903646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.903685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 
00:34:30.352 [2024-07-11 21:41:04.903860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.903899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.904067] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.904094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.904260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.904315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.904485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.904541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.904668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.904696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.904844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.904874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.904991] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.905019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.905167] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.905195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.905343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.905392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.905527] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.905581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 
00:34:30.352 [2024-07-11 21:41:04.905751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.905801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.905932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.905957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.906093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.906136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.906306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.906335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.906504] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.906532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.906678] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.906716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.906870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.906898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.907064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.907123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.907264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.907294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.907463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.907493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 
00:34:30.352 [2024-07-11 21:41:04.907629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.907669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.907809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.907838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.907962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.907992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.908171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.908197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.908337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.908362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.908498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.908528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.908638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.908664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.908797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.908823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.908954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.908997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 00:34:30.352 [2024-07-11 21:41:04.909158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.352 [2024-07-11 21:41:04.909184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.352 qpair failed and we were unable to recover it. 
00:34:30.352 [2024-07-11 21:41:04.909320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.909345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.909476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.909501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.909632] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.909660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.909798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.909825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.909958] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.909985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.910119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.910145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.910252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.910278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.910405] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.910432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.910543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.910568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.910702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.910729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 
00:34:30.353 [2024-07-11 21:41:04.910882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.910925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.911104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.911148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.911372] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.911428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.911602] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.911629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.911736] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.911773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.911934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.911961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.912086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.912131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.912258] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.912311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.912510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.912562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.912670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.912697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 
00:34:30.353 [2024-07-11 21:41:04.912816] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.912859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.913016] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.913059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.913230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.913289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.913461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.913490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.913611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.913636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.913777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.913803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.913955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.914012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.914170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.914201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.914324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.914366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.914494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.914521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 
00:34:30.353 [2024-07-11 21:41:04.914671] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.914711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.914844] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.914876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.915004] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.915033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.915147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.915178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.915374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.915404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.915554] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.915616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.915775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.915803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.915928] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.915972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.353 [2024-07-11 21:41:04.916147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.353 [2024-07-11 21:41:04.916191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.353 qpair failed and we were unable to recover it. 00:34:30.354 [2024-07-11 21:41:04.916295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.354 [2024-07-11 21:41:04.916322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.354 qpair failed and we were unable to recover it. 
00:34:30.354 [2024-07-11 21:41:04.916484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.354 [2024-07-11 21:41:04.916511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.354 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1038:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: sock connection error; "qpair failed and we were unable to recover it.") repeats continuously from 2024-07-11 21:41:04.916 through 21:41:04.953, differing only in microsecond timestamps and in the qpair handle, which cycles among tqpair=0x1c1ef20, 0x7fb798000b90, 0x7fb7a0000b90, and 0x7fb7a8000b90, all targeting addr=10.0.0.2, port=4420 ...]
00:34:30.358 [2024-07-11 21:41:04.953083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.358 [2024-07-11 21:41:04.953128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.358 qpair failed and we were unable to recover it. 00:34:30.358 [2024-07-11 21:41:04.953286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.358 [2024-07-11 21:41:04.953328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.358 qpair failed and we were unable to recover it. 00:34:30.358 [2024-07-11 21:41:04.953570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.358 [2024-07-11 21:41:04.953619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.358 qpair failed and we were unable to recover it. 00:34:30.358 [2024-07-11 21:41:04.953797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.358 [2024-07-11 21:41:04.953826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.358 qpair failed and we were unable to recover it. 00:34:30.358 [2024-07-11 21:41:04.954021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.358 [2024-07-11 21:41:04.954065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.358 qpair failed and we were unable to recover it. 00:34:30.358 [2024-07-11 21:41:04.954239] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.358 [2024-07-11 21:41:04.954287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.358 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.954435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.954483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.954618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.954645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.954779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.954806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.954943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.954969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 
00:34:30.359 [2024-07-11 21:41:04.955079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.955104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.955256] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.955285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.955392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.955425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.955565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.955595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.955743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.955779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.955909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.955935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.956087] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.956116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.956245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.956290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.956434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.956462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.956607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.956636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 
00:34:30.359 [2024-07-11 21:41:04.956766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.956793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.956949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.956974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.957125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.957153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.957298] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.957327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.957466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.957494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.957628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.957656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.957819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.957846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.958003] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.958029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.958169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.958212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.958488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.958540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 
00:34:30.359 [2024-07-11 21:41:04.958712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.958741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.958899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.958925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.959070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.959098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.959242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.959271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.959392] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.959420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.959637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.959706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.959854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.959884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.960046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.960088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.960282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.960331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.960515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.960541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 
00:34:30.359 [2024-07-11 21:41:04.960699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.960725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.960840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.960867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.960998] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.961024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.961161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.961187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.961347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.961374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.961480] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.961507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.961621] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.961648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.961805] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.961848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.961966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.961996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.359 [2024-07-11 21:41:04.962166] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.962195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 
00:34:30.359 [2024-07-11 21:41:04.962416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.359 [2024-07-11 21:41:04.962465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.359 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.962614] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.962642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.962788] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.962834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.962955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.962984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.963104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.963133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.963275] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.963304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.963516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.963542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.963643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.963668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.963806] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.963834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.963968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.963994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 
00:34:30.360 [2024-07-11 21:41:04.964149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.964179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.964324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.964352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.964522] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.964550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.964659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.964688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.964840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.964867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.965022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.965051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.965195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.965223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.965363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.965391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.965533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.965563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.965682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.965710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 
00:34:30.360 [2024-07-11 21:41:04.965874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.965900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.966073] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.966122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.966271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.966299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.966442] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.966471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.966613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.966645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.966803] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.966830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.967011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.967055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.967205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.967247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.967397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.967440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.967600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.967626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 
00:34:30.360 [2024-07-11 21:41:04.967741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.967774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.967932] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.967961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.968163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.968192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.968387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.968432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.968569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.968596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.968740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.968787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.968938] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.968968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.969110] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.969139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.969281] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.969311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.969532] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.969580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 
00:34:30.360 [2024-07-11 21:41:04.969747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.969798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.969930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.969960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.970127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.970170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.970335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.360 [2024-07-11 21:41:04.970361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.360 qpair failed and we were unable to recover it. 00:34:30.360 [2024-07-11 21:41:04.970494] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.970521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.970655] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.970681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.970810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.970839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.971001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.971044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.971169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.971214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.971353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.971379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 
00:34:30.361 [2024-07-11 21:41:04.971485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.971513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.971641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.971667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.971785] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.971812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.971961] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.972004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.972184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.972231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.972360] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.972385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.972516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.972547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.972677] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.972703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.972879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.972923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.973021] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.973048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 
00:34:30.361 [2024-07-11 21:41:04.973213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.973240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.973340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.973366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.973493] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.973519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.973680] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.973706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.973855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.973882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.974013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.974039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.974150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.974176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.974310] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.974336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.974495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.974521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.974624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.974651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 
00:34:30.361 [2024-07-11 21:41:04.974853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.974899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.975044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.975074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.975244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.975289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.975421] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.975447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.975575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.975602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.975724] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.975751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.975890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.975916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.976049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.976077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.976210] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.976235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 00:34:30.361 [2024-07-11 21:41:04.976365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.361 [2024-07-11 21:41:04.976391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.361 qpair failed and we were unable to recover it. 
00:34:30.361 [2024-07-11 21:41:04.976540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.361 [2024-07-11 21:41:04.976565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:30.361 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() failed, errno = 111 -> sock connection error -> "qpair failed and we were unable to recover it.") repeats continuously from 21:41:04.976 to 21:41:04.986, alternating between tqpair=0x7fb7a8000b90 and tqpair=0x7fb7a0000b90, always addr=10.0.0.2, port=4420 ...]
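For context: errno = 111 on Linux is ECONNREFUSED, i.e. nothing is accepting TCP connections on 10.0.0.2:4420 while the target application is down, so every reconnect attempt the host driver makes is refused immediately. A minimal standalone sketch of the failing call the log keeps recording (an illustrative reproduction, not SPDK's posix_sock_create itself; address and port taken from the log):

    #include <arpa/inet.h>
    #include <errno.h>
    #include <netinet/in.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <unistd.h>

    /* Reproduce the failure mode in the log: a TCP connect() to an
     * address where nothing is listening fails with ECONNREFUSED (111). */
    int main(void)
    {
        int fd = socket(AF_INET, SOCK_STREAM, 0);
        if (fd < 0) {
            perror("socket");
            return 1;
        }

        struct sockaddr_in addr = {0};
        addr.sin_family = AF_INET;
        addr.sin_port = htons(4420);                 /* NVMe-oF TCP port from the log */
        inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

        if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0) {
            /* With the target down, this prints: connect() failed, errno = 111 */
            printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));
        }
        close(fd);
        return 0;
    }

Run against a host with no listener on port 4420, it prints "connect() failed, errno = 111 (Connection refused)", matching the repeated entries above.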
00:34:30.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 1063180 Killed                  "${NVMF_APP[@]}" "$@"
00:34:30.363 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:34:30.363 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:34:30.363 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:34:30.363 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable
00:34:30.363 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the connect() failed (errno = 111) / qpair failed triplets continue interleaved with this trace, tqpair=0x7fb7a0000b90, addr=10.0.0.2, port=4420 ...]
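disconnect_init is the point where the harness brings the target back: the old nvmf_tgt process (pid 1063180) has just been killed, and nvmfappstart -m 0xF0 relaunches it. The -m argument is a hex core mask; assuming the usual bitmap interpretation (bit i set = CPU core i used), 0xF0 pins the application to cores 4-7. A small sketch of that decoding:

    #include <stdio.h>

    /* Decode a hex core mask like the 0xF0 passed via "nvmfappstart -m 0xF0":
     * bit i set means CPU core i is used by the application. */
    int main(void)
    {
        unsigned long mask = 0xF0;

        printf("core mask 0x%lX selects cores:", mask);
        for (int core = 0; mask != 0; core++, mask >>= 1) {
            if (mask & 1)
                printf(" %d", core);
        }
        printf("\n");    /* prints: core mask 0xF0 selects cores: 4 5 6 7 */
        return 0;
    }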
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=1063727
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 1063727
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 1063727 ']'
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable
00:34:30.364 21:41:04 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... connect() failed (errno = 111) / qpair failed triplets continue interleaved with this trace, tqpair=0x7fb7a0000b90, addr=10.0.0.2, port=4420 ...]
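waitforlisten 1063727 then blocks until the relaunched target (pid 1063727) is serving its RPC socket; the trace shows rpc_addr=/var/tmp/spdk.sock and max_retries=100. The real helper is a bash function in autotest_common.sh; the following is only a rough C equivalent of its wait loop, with wait_for_listen as a hypothetical name:

    #include <stdio.h>
    #include <string.h>
    #include <sys/socket.h>
    #include <sys/un.h>
    #include <unistd.h>

    /* Illustrative wait loop: retry connecting to the SPDK RPC UNIX socket
     * until the freshly started target is listening, up to max_retries tries. */
    int wait_for_listen(const char *rpc_addr, int max_retries)
    {
        struct sockaddr_un sa = {0};
        sa.sun_family = AF_UNIX;
        strncpy(sa.sun_path, rpc_addr, sizeof(sa.sun_path) - 1);

        for (int i = 0; i < max_retries; i++) {
            int fd = socket(AF_UNIX, SOCK_STREAM, 0);
            if (fd < 0)
                return -1;
            if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) == 0) {
                close(fd);
                return 0;        /* target is up and listening */
            }
            close(fd);
            sleep(1);            /* not yet listening; retry */
        }
        return -1;               /* gave up after max_retries attempts */
    }

    int main(void)
    {
        /* Values taken from the trace: rpc_addr=/var/tmp/spdk.sock, max_retries=100 */
        if (wait_for_listen("/var/tmp/spdk.sock", 100) == 0)
            puts("process is up and listening on /var/tmp/spdk.sock");
        else
            puts("timed out waiting for /var/tmp/spdk.sock");
        return 0;
    }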
[... the connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets keep repeating for tqpair=0x7fb7a0000b90, addr=10.0.0.2, port=4420 through 21:41:05.009, where this portion of the log ends ...]
00:34:30.366 [2024-07-11 21:41:05.009384] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.366 [2024-07-11 21:41:05.009410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.366 qpair failed and we were unable to recover it. 00:34:30.366 [2024-07-11 21:41:05.009515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.366 [2024-07-11 21:41:05.009541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.366 qpair failed and we were unable to recover it. 00:34:30.366 [2024-07-11 21:41:05.009667] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.366 [2024-07-11 21:41:05.009692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.366 qpair failed and we were unable to recover it. 00:34:30.366 [2024-07-11 21:41:05.009820] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.366 [2024-07-11 21:41:05.009847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.366 qpair failed and we were unable to recover it. 00:34:30.366 [2024-07-11 21:41:05.009956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.366 [2024-07-11 21:41:05.009981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.010090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.010116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.010246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.010271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.010397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.010427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.010563] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.010588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.010693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.010718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 
00:34:30.367 [2024-07-11 21:41:05.010822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.010848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.010955] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.010980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.011083] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.011108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.011238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.011263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.011387] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.011413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.011553] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.011579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.011716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.011742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.011860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.011886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.012014] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.012039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.012172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.012198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 
00:34:30.367 [2024-07-11 21:41:05.012359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.012384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.012486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.012511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.012619] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.012644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.012780] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.012806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.012927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.012956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.013114] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.013139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.013267] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.013293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.013428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.013454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.013556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.013582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.013712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.013738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 
00:34:30.367 [2024-07-11 21:41:05.013894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.013937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.014096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.014140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.014268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.014295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.014422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.014449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.014561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.014588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.014733] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.014765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.014925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.014951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.015089] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.015115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.015244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.015288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.015417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.015443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 
00:34:30.367 [2024-07-11 21:41:05.015570] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.015596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.015725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.015756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.015871] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.015897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.016025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.016051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.016177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.016203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.016359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.016386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.016515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.367 [2024-07-11 21:41:05.016543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.367 qpair failed and we were unable to recover it. 00:34:30.367 [2024-07-11 21:41:05.016675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.016706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.016903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.016949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.017103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.017146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 
00:34:30.368 [2024-07-11 21:41:05.017266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.017295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.017449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.017475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.017583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.017609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.017767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.017811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.017918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.017944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.018074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.018100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.018260] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.018286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.018419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.018445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.018561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.018587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.018697] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.018723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 
00:34:30.368 [2024-07-11 21:41:05.018835] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.018861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.018996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.019022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.019149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.019176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.019283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.019309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.019419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.019445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.019573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.019599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.019713] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.019741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.019855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.019882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.019992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.020018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.020181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.020207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 
00:34:30.368 [2024-07-11 21:41:05.020306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.020334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.020436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.020462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.020596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.020623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.020763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.020789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.020949] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.020989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.021129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.021157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.021253] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.021279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.021396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.021422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.021526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.021555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.021695] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.021722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 
00:34:30.368 [2024-07-11 21:41:05.021875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.021920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.022068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.022113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.022233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.022263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.022415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.022442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.022575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.022601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.022708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.022733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.022885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.022929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.023095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.023126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.023279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.368 [2024-07-11 21:41:05.023308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.368 qpair failed and we were unable to recover it. 00:34:30.368 [2024-07-11 21:41:05.023479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.023505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 
00:34:30.369 [2024-07-11 21:41:05.023633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.023659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.023813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.023843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.023979] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.024007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.024105] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.024131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.024266] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.024293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.024424] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.024451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.024588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.024614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.024712] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.024738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.024894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.024922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.025074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.025117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 
00:34:30.369 [2024-07-11 21:41:05.025245] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.025272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.025376] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.025402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.025509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.025535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.025646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.025672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.025775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.025801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.025933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.025958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.026068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.026094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.026230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.026257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.026366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.026392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.026498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.026524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 
00:34:30.369 [2024-07-11 21:41:05.026660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.026685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.026838] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.026882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.027011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.027037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.027269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.027319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.027434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.027460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.027568] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.027594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.027701] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.027727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.027880] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.027906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.028015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.028043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.028172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.028198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 
00:34:30.369 [2024-07-11 21:41:05.028303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.028331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.028438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.028464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.028595] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.028621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.028727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.028759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.028890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.028916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.029020] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.029047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.029179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.029205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.029312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.029343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.029478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.029504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 00:34:30.369 [2024-07-11 21:41:05.029609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.369 [2024-07-11 21:41:05.029636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.369 qpair failed and we were unable to recover it. 
00:34:30.369 [2024-07-11 21:41:05.029765] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.369 [2024-07-11 21:41:05.029791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.369 qpair failed and we were unable to recover it.
[... the three-line connect()/qpair-failure pattern above repeats back to back from 21:41:05.029765 through 21:41:05.042636, always against addr=10.0.0.2, port=4420, cycling over tqpair handles 0x7fb7a0000b90, 0x7fb7a8000b90, and 0x1c1ef20 ...]
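errno = 111 is Linux's ECONNREFUSED: nothing is accepting connections on 10.0.0.2:4420 (4420 is the IANA-assigned NVMe/TCP port), apparently because the target is still coming up (its SPDK/DPDK initialization banner appears a few milliseconds later in this same log), so each qpair connect attempt is refused and retried. A minimal standalone sketch of the failing call (illustrative only, not SPDK's posix.c) looks like this:

/* sketch.c: mirrors the plain TCP connect() that produces the
 * "connect() failed, errno = 111" lines above.
 * Build: cc -o sketch sketch.c */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in addr = { 0 };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                  /* NVMe/TCP well-known port */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
        /* with no listener on 10.0.0.2:4420 this prints errno = 111 */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));
    }
    close(fd);
    return 0;
}

Run against an address with no listener on that port and it prints the same "connect() failed, errno = 111" seen throughout this log.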
00:34:30.372 [2024-07-11 21:41:05.043005] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization...
00:34:30.372 [2024-07-11 21:41:05.043094] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
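The EAL parameters above pin the nvmf target to coremask 0xF0, i.e. CPU cores 4 through 7. A tiny sketch (an assumed helper, not part of DPDK) that decodes such a mask:

/* coremask.c: decode a DPDK-style coremask such as the "-c 0xF0"
 * in the EAL parameters above. Build: cc -o coremask coremask.c */
#include <stdio.h>

int main(void)
{
    unsigned long coremask = 0xF0;   /* value taken from "-c 0xF0" */

    printf("coremask 0x%lX enables cores:", coremask);
    for (int core = 0; core < 64; core++) {
        if (coremask & (1UL << core)) {
            printf(" %d", core);
        }
    }
    putchar('\n');   /* prints: coremask 0xF0 enables cores: 4 5 6 7 */
    return 0;
}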
[... the same connect() failed, errno = 111 / qpair-failure pattern continues uninterrupted around and after the initialization messages above, alternating between tqpair handles 0x7fb7a0000b90 and 0x1c1ef20, through 21:41:05.064522 ...]
00:34:30.375 [2024-07-11 21:41:05.064668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.064698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.064850] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.064877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.065033] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.065062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.065205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.065234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.065355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.065384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.065546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.065603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.065717] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.065764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.065874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.065901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.066013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.066040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.066151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.066177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 
00:34:30.375 [2024-07-11 21:41:05.066307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.066333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.066444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.066471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.066585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.066611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.066743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.066777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.066897] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.066941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.067041] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.067071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.067205] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.067230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.067377] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.067406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.067548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.375 [2024-07-11 21:41:05.067574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.375 qpair failed and we were unable to recover it. 00:34:30.375 [2024-07-11 21:41:05.067696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.067723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 
00:34:30.376 [2024-07-11 21:41:05.067870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.067898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.068032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.068063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.068184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.068214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.068368] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.068395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.068526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.068551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.068659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.068685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.068823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.068851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.068959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.068997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.069134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.069160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.069304] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.069335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 
00:34:30.376 [2024-07-11 21:41:05.069491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.069521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.069707] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.069736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.069920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.069966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.070086] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.070119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.070285] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.070329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.070462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.070489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.070641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.070667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.070768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.070796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.070921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.070967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.071150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.071204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 
00:34:30.376 [2024-07-11 21:41:05.071335] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.071362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.071466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.071496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.071613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.071653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.071769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.071797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.071948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.071974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.072129] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.072158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.376 qpair failed and we were unable to recover it. 00:34:30.376 [2024-07-11 21:41:05.072302] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.376 [2024-07-11 21:41:05.072340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.072501] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.072535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.072703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.072731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.072870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.072896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 
00:34:30.650 [2024-07-11 21:41:05.073051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.073080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.073324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.073371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.073469] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.073499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.073609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.073636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.073773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.073800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.073934] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.073966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.074076] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.650 [2024-07-11 21:41:05.074103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.650 qpair failed and we were unable to recover it. 00:34:30.650 [2024-07-11 21:41:05.074269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.074296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.074435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.074461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.074564] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.074591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 
00:34:30.651 [2024-07-11 21:41:05.074746] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.074779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.074937] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.074962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.075090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.075116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.075218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.075247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.075401] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.075447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.075579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.075606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.075769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.075800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.075993] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.076038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.076225] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.076269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.076409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.076436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 
00:34:30.651 [2024-07-11 21:41:05.076546] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.076572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.076722] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.076769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.076919] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.076951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.077065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.077094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.077212] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.077241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.077409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.077455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.077607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.077632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.077768] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.077794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.077891] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.077916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.078044] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.078070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 
00:34:30.651 [2024-07-11 21:41:05.078236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.078264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.078419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.078473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.078601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.078631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.078775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.078819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.078948] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.078977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.079123] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.079151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.079268] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.079297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.079419] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.079466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.079605] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.079645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 00:34:30.651 [2024-07-11 21:41:05.079771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.651 [2024-07-11 21:41:05.079801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.651 qpair failed and we were unable to recover it. 
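For reference, errno = 111 is ECONNREFUSED on Linux: each connect() attempt above reached a host with no listener on port 4420 (the NVMe/TCP default port) and was actively refused, so the driver could not establish the qpair socket. The following is a minimal standalone C sketch, not SPDK code, that reproduces the same error value against a reachable address with no listener; the address and port are copied from the log purely for illustration.

#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_port = htons(4420);                /* NVMe/TCP port from the log */
    inet_pton(AF_INET, "10.0.0.2", &addr.sin_addr);

    /* Against a reachable host with nothing bound to the port, connect()
     * fails with ECONNREFUSED, which is errno 111 on Linux -- the same
     * value the posix_sock_create errors above report. */
    if (connect(fd, (struct sockaddr *)&addr, sizeof(addr)) < 0)
        printf("connect() failed, errno = %d (%s)\n", errno, strerror(errno));

    close(fd);
    return 0;
}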
00:34:30.651 [2024-07-11 21:41:05.079940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.651 [2024-07-11 21:41:05.079967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:30.651 qpair failed and we were unable to recover it.
00:34:30.651 EAL: No free 2048 kB hugepages reported on node 1
00:34:30.651 [2024-07-11 21:41:05.080659] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.651 [2024-07-11 21:41:05.080688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.651 qpair failed and we were unable to recover it.
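The EAL line above is printed by DPDK during environment initialization and means no free 2048 kB hugepages were reported on NUMA node 1; it is informational here and separate from the TCP connect failures. A minimal C sketch, assuming the standard Linux /proc/meminfo layout (this is not part of the test code), that prints the system-wide hugepage counters a run like this would be seeing:

#include <stdio.h>
#include <string.h>

int main(void)
{
    FILE *f = fopen("/proc/meminfo", "r");
    if (!f) {
        perror("fopen");
        return 1;
    }

    char line[256];
    while (fgets(line, sizeof(line), f)) {
        /* HugePages_Total / HugePages_Free report system-wide 2048 kB
         * pages; per-NUMA-node counts (as in the EAL message) live under
         * /sys/devices/system/node/node<N>/hugepages/. */
        if (strncmp(line, "HugePages_", 10) == 0)
            fputs(line, stdout);
    }
    fclose(f);
    return 0;
}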
00:34:30.651 [2024-07-11 21:41:05.081657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.651 [2024-07-11 21:41:05.081683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.651 qpair failed and we were unable to recover it.
00:34:30.651 [2024-07-11 21:41:05.081829] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.651 [2024-07-11 21:41:05.081868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:30.651 qpair failed and we were unable to recover it.
00:34:30.652 [2024-07-11 21:41:05.084807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.652 [2024-07-11 21:41:05.084847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:30.652 qpair failed and we were unable to recover it.
00:34:30.652 [2024-07-11 21:41:05.086607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.652 [2024-07-11 21:41:05.086647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:30.652 qpair failed and we were unable to recover it.
00:34:30.653 [2024-07-11 21:41:05.092942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.653 [2024-07-11 21:41:05.092968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:30.653 qpair failed and we were unable to recover it.
00:34:30.653 [2024-07-11 21:41:05.093099] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.093125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.093252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.093278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.093386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.093411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.093536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.093562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.093694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.093720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.093849] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.093889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.094010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.094037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.094183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.094209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.094315] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.094340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.094444] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.094471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 
00:34:30.653 [2024-07-11 21:41:05.094578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.094617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.094763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.094792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.094929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.094957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.095070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.095096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.095249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.095275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.095380] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.095406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.095533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.095558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.095685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.095712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.095885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.095925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.096072] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.096099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 
00:34:30.653 [2024-07-11 21:41:05.096209] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.096236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.096343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.096369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.096478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.096505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.096651] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.096690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.096836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.096864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.096971] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.096998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.097136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.097163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.097269] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.097296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.097426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.097452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.097543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.097569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 
00:34:30.653 [2024-07-11 21:41:05.097673] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.097702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.097842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.097874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.098017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.098057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.098161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.098189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.098300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.098326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.098455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.098481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.098618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.098646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.098826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.098865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.098977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.099005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.099134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.099160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 
00:34:30.653 [2024-07-11 21:41:05.099295] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.099321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.099459] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.099485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.099588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.099614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.099745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.099778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.099906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.099931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.653 [2024-07-11 21:41:05.100084] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.653 [2024-07-11 21:41:05.100109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.653 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.100221] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.100246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.100344] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.100370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.100470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.100496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.100600] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.100626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 
00:34:30.654 [2024-07-11 21:41:05.100745] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.100790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.100927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.100954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.101095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.101120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.101259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.101285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.101386] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.101412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.101540] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.101566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.101708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.101735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.101840] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.101865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.101997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.102023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.102150] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.102183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 
00:34:30.654 [2024-07-11 21:41:05.102324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.102349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.102462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.102501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.102612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.102641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.102781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.102808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.102963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.102990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.103149] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.103175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.103309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.103335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.103470] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.103497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.103657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.103684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.103812] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.103839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 
00:34:30.654 [2024-07-11 21:41:05.103942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.103968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.104079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.104105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.104234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.104260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.104397] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.104423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.104551] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.104576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.104705] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.104730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.104869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.104897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.105015] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.105042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.105157] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.105185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.105290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.105316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 
00:34:30.654 [2024-07-11 21:41:05.105447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.105473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.105611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.105638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.105782] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.105809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.105920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.105947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.106053] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.106078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.106207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.106233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.106367] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.106393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.106506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.106532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.106658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.106684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.106847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.106873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 
00:34:30.654 [2024-07-11 21:41:05.106988] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.107013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.107145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.107170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.107277] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.107302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.107400] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.107426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.107526] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.654 [2024-07-11 21:41:05.107551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.654 qpair failed and we were unable to recover it. 00:34:30.654 [2024-07-11 21:41:05.107654] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.107680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.107814] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.107841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.107953] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.107992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.108095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.108121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.108254] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.108280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 
00:34:30.655 [2024-07-11 21:41:05.108391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.108417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.108534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.108572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.108709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.108736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.108848] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.108875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.109000] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.109025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.109151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.109177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.109305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.109331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.109466] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.109494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.109607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.109646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.109759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.109787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 
00:34:30.655 [2024-07-11 21:41:05.109920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.109946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.110106] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.110131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.110232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.110257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.110433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.110460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.110569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.110596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.110737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.110774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.110908] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.110934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.111055] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.111081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.111184] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.111209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.111338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.111365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 
00:34:30.655 [2024-07-11 21:41:05.111474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.111501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.111603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.111629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.111764] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.111790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.111947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.111972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.112103] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.112129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.112238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.112264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.112365] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.112396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.112539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.112579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.112725] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.112771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 00:34:30.655 [2024-07-11 21:41:05.112912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.655 [2024-07-11 21:41:05.112939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.655 qpair failed and we were unable to recover it. 
00:34:30.655 [2024-07-11 21:41:05.113048] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.655 [2024-07-11 21:41:05.113078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:30.655 qpair failed and we were unable to recover it.
00:34:30.655 [2024-07-11 21:41:05.113464] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.655 [2024-07-11 21:41:05.113492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:30.655 qpair failed and we were unable to recover it.
00:34:30.655 [2024-07-11 21:41:05.115048] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4
00:34:30.656 [2024-07-11 21:41:05.117061] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.656 [2024-07-11 21:41:05.117100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.656 qpair failed and we were unable to recover it.
00:34:30.656 [2024-07-11 21:41:05.117883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.656 [2024-07-11 21:41:05.117923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:30.656 qpair failed and we were unable to recover it.
00:34:30.659 [... identical connect() failed, errno = 111 / sock connection error / "qpair failed and we were unable to recover it." triplets repeat continuously through 21:41:05.146, cycling over the same four tqpair handles (0x1c1ef20, 0x7fb7a8000b90, 0x7fb7a0000b90, 0x7fb798000b90), all targeting addr=10.0.0.2, port=4420 ...]
00:34:30.659 [2024-07-11 21:41:05.146381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.146407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.146515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.146541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.146674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.146700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.146836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.146863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.146965] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.146991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.147115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.147141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.147242] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.147269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.147366] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.147392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.147519] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.147545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.147681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.147706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 
00:34:30.659 [2024-07-11 21:41:05.147853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.147892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.148030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.148058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.148200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.148226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.148326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.148351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.148478] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.148518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.148685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.148713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.148826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.148857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.148990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.149017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.149131] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.149157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.149290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.149316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 
00:34:30.659 [2024-07-11 21:41:05.149446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.149472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.149585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.149612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.149743] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.659 [2024-07-11 21:41:05.149778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.659 qpair failed and we were unable to recover it. 00:34:30.659 [2024-07-11 21:41:05.149879] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.149905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.150005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.150031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.150161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.150187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.150297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.150324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.150426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.150452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.150629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.150669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.150809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.150838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 
00:34:30.660 [2024-07-11 21:41:05.150983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.151023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.151191] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.151219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.151320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.151346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.151451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.151478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.151580] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.151607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.151703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.151730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.151851] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.151890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.151994] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.152021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.152125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.152151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.152312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.152338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 
00:34:30.660 [2024-07-11 21:41:05.152498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.152527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.152640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.152667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.152804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.152832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.152941] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.152967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.153082] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.153109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.153237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.153262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.153396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.153422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.153579] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.153604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.153711] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.153738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.153905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.153932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 
00:34:30.660 [2024-07-11 21:41:05.154064] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.154090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.154218] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.154244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.154348] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.154375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.154509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.154535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.154664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.154695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.154836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.154865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.155026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.155063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.155204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.155230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.155359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.155385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.155484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.155510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 
00:34:30.660 [2024-07-11 21:41:05.155613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.155639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.155796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.155823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.155954] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.155980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.156113] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.156139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.156240] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.660 [2024-07-11 21:41:05.156265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.660 qpair failed and we were unable to recover it. 00:34:30.660 [2024-07-11 21:41:05.156394] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.156420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.156550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.156576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.156714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.156743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.156867] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.156894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.157026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.157062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 
00:34:30.661 [2024-07-11 21:41:05.157163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.157190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.157333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.157359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.157467] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.157494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.157636] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.157662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.157796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.157823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.157987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.158013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.158119] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.158145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.158286] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.158312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.158415] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.158441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.158559] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.158598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 
00:34:30.661 [2024-07-11 21:41:05.158766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.158806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.158925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.158959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.159134] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.159169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.159312] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.159338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.159441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.159468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.159624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.159650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.159801] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.159841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.159960] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.160000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.160145] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.160172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.160306] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.160333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 
00:34:30.661 [2024-07-11 21:41:05.160468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.160494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.160622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.160648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.160779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.160806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.160914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.160940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.161071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.161097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.161203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.161229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.161339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.161365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.161516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.161556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.161698] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.161726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.161847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.161876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 
00:34:30.661 [2024-07-11 21:41:05.161987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.162014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.162156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.162182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.162291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.162317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.162420] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.162446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.162581] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.162607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.162732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.162765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.162877] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.162903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.163024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.163063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.163180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.163210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.163314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.163342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 
00:34:30.661 [2024-07-11 21:41:05.163463] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.163491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.163633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.163660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.163776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.163816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.163920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.163947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.164090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.164115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.164228] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.164253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.164413] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.164439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.164556] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.164596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.164704] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.164732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.164893] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.164920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 
00:34:30.661 [2024-07-11 21:41:05.165027] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.165056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.661 [2024-07-11 21:41:05.165151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.661 [2024-07-11 21:41:05.165183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.661 qpair failed and we were unable to recover it. 00:34:30.662 [2024-07-11 21:41:05.165296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.165324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 00:34:30.662 [2024-07-11 21:41:05.165423] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.165451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 00:34:30.662 [2024-07-11 21:41:05.165618] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.165644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 00:34:30.662 [2024-07-11 21:41:05.165786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.165815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 00:34:30.662 [2024-07-11 21:41:05.165947] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.165974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 00:34:30.662 [2024-07-11 21:41:05.166090] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.166115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 00:34:30.662 [2024-07-11 21:41:05.166229] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.166257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 00:34:30.662 [2024-07-11 21:41:05.166389] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.166415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 
00:34:30.662 [2024-07-11 21:41:05.166544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.662 [2024-07-11 21:41:05.166570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.662 qpair failed and we were unable to recover it. 
00:34:30.662-00:34:30.666 [2024-07-11 21:41:05.166701 .. 21:41:05.199472] (the same three-message sequence -- posix.c:1038:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error; qpair failed and we were unable to recover it -- repeats continuously for tqpair values 0x7fb7a8000b90, 0x7fb7a0000b90, 0x7fb798000b90, and 0x1c1ef20, all targeting addr=10.0.0.2, port=4420) 
00:34:30.666 [2024-07-11 21:41:05.199587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.199612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.199714] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.199740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.199860] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.199886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.200019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.200056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.200204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.200231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.200328] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.200353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.200510] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.200536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.200676] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.200716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.200907] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.200947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.201104] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.201143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 
00:34:30.666 [2024-07-11 21:41:05.201290] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.201318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.201473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.201499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.201608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.201634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.201739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.201776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.201881] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.201907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.202005] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.202030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.202164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.202201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.202305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.202331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.202436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.202463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.202590] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.202615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 
00:34:30.666 [2024-07-11 21:41:05.202749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.202782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.202892] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.202917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.203025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.203063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.203168] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.203193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.203324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.203350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.203485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.203518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.203629] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.203655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.203786] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.203812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.203925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.203951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.204059] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.204085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 
00:34:30.666 [2024-07-11 21:41:05.204192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.204219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.204326] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.204354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.204515] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.204541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.204641] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.204667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.204783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.204810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.204912] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.204940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.205054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.205081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.205181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.205207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.205314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.205340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.205455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.205497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 
00:34:30.666 [2024-07-11 21:41:05.205607] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.205635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.205749] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.205781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.205885] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.205912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.666 qpair failed and we were unable to recover it. 00:34:30.666 [2024-07-11 21:41:05.206013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.666 [2024-07-11 21:41:05.206039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it.
00:34:30.667 [2024-07-11 21:41:05.206131] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:30.667 [2024-07-11 21:41:05.206165] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:30.667 [2024-07-11 21:41:05.206180] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:30.667 [2024-07-11 21:41:05.206193] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:30.667 [2024-07-11 21:41:05.206205] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:34:30.667 [2024-07-11 21:41:05.206180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.206207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.206330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.206359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.206517] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.206544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it.
00:34:30.667 [2024-07-11 21:41:05.206484] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:34:30.667 [2024-07-11 21:41:05.206520] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:34:30.667 [2024-07-11 21:41:05.206567] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:34:30.667 [2024-07-11 21:41:05.206570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:34:30.667 [2024-07-11 21:41:05.206685] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.206711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.206836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.206863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.206997] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.207023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.207161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.207187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.207333] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.207361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.207495] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.207522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.207660] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.207686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.207823] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.207850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.207966] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.207993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 
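The app_setup_trace notices embedded above are the target application's trace-setup banner rather than part of the failure: tracepoint group mask 0xFFFF enables all trace groups, and per the banner a snapshot can be captured at runtime with spdk_trace -s nvmf -i 0 (or plain spdk_trace when this is the only SPDK application running), or /dev/shm/nvmf_trace.0 can be copied for offline analysis. The reactor_run notices likewise just record the target's reactors starting on cores 4-7 while the initiator keeps retrying.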
00:34:30.667 [2024-07-11 21:41:05.208138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.208165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.208300] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.208326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.208426] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.208451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.208592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.208624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.208771] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.208799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.208900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.208927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.209075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.209102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.209238] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.209264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.209382] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.209408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.209544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.209570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 
00:34:30.667 [2024-07-11 21:41:05.209681] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.209710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.209832] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.209860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.209968] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.209996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.210139] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.210166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.210276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.210304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.210412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.210439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.210541] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.210567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.210684] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.210710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.210834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.210861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.210996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.211023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 
00:34:30.667 [2024-07-11 21:41:05.211125] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.211152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.211288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.211315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.211428] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.211463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.211572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.211598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.211729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.211772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.211882] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.211909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.212013] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.212039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.212151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.212177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.212317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.212343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.667 [2024-07-11 21:41:05.212447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.212474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 
00:34:30.667 [2024-07-11 21:41:05.212585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.667 [2024-07-11 21:41:05.212615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.667 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.212762] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.212790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.212894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.212920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.213025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.213057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.213198] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.213225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.213337] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.213365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.213483] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.213510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.213648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.213675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.213798] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.213826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.213939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.213966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 
00:34:30.668 [2024-07-11 21:41:05.214066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.214092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.214189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.214217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.214325] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.214352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.214489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.214517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.214668] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.214695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.214813] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.214840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.214975] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.215001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.215136] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.215163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.215271] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.215296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.215452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.215478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 
00:34:30.668 [2024-07-11 21:41:05.215582] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.215607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.215709] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.215734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.215847] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.215874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.215986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.216013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.216115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.216146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.216276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.216302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.216407] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.216433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.216539] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.216579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.216715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.216741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.216861] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.216887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 
00:34:30.668 [2024-07-11 21:41:05.216992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.217019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.217161] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.217187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.217303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.217329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.217458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.217484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.217611] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.217636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.217777] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.217803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.217923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.217949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.218046] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.218072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.218186] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.218212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 00:34:30.668 [2024-07-11 21:41:05.218321] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.668 [2024-07-11 21:41:05.218347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.668 qpair failed and we were unable to recover it. 
00:34:30.668 [2024-07-11 21:41:05.218453] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.668 [2024-07-11 21:41:05.218480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:30.668 qpair failed and we were unable to recover it.
[... the same three-record failure pattern (posix.c:1038:posix_sock_create "connect() failed, errno = 111" -> nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock "sock connection error" -> "qpair failed and we were unable to recover it.") repeats continuously from 21:41:05.218453 through 21:41:05.249669, alternating between tqpair=0x1c1ef20, tqpair=0x7fb798000b90, and tqpair=0x7fb7a0000b90, every attempt targeting addr=10.0.0.2, port=4420 ...]
00:34:30.672 [2024-07-11 21:41:05.249643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.672 [2024-07-11 21:41:05.249669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:30.672 qpair failed and we were unable to recover it.
00:34:30.672 [2024-07-11 21:41:05.249772] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.249806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.249940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.249966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.250093] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.250119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.250246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.250272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.250373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.250399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.250499] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.250525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.250690] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.250715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.250843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.250874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.250983] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.251011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.251164] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.251191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 
00:34:30.672 [2024-07-11 21:41:05.251305] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.251333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.251430] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.251456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.251565] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.251591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.251727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.251761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.251902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.251929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.252091] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.252117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.252213] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.252239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.252343] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.672 [2024-07-11 21:41:05.252370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.672 qpair failed and we were unable to recover it. 00:34:30.672 [2024-07-11 21:41:05.252513] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.252539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.252674] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.252700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 
00:34:30.673 [2024-07-11 21:41:05.252810] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.252836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.252956] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.252983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.253102] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.253128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.253223] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.253248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.253347] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.253372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.253471] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.253497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.253604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.253630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.253747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.253779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.253913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.253939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.254039] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.254064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 
00:34:30.673 [2024-07-11 21:41:05.254183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.254208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.254334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.254360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.254488] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.254514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.254657] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.254682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.254815] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.254846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.254951] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.254977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.255111] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.255137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.255243] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.255269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.255414] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.255455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.255578] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.255608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 
00:34:30.673 [2024-07-11 21:41:05.255747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.255779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.255914] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.255940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.256079] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.256105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.256235] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.256261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.256370] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.256396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.256498] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.256524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.256662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.256688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.256789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.256824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.256940] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.256966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.257096] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.257122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 
00:34:30.673 [2024-07-11 21:41:05.257222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.257248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.257353] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.257379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.257487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.257513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.257622] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.257651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.257797] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.257824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.257927] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.257953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.258094] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.258122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.258237] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.258264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.258364] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.258390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.258550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.258576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 
00:34:30.673 [2024-07-11 21:41:05.258682] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.258708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.258842] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.258883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.259017] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.259045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.259177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.259203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.259303] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.259329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.259432] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.259459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.259601] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.259626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.259796] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.259822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.673 [2024-07-11 21:41:05.259922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.673 [2024-07-11 21:41:05.259948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.673 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.260051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.260076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 
00:34:30.674 [2024-07-11 21:41:05.260177] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.260203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.260338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.260364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.260461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.260486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.260597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.260623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.260719] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.260745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.260869] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.260895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.260996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.261022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.261155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.261181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.261283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.261309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.261409] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.261437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 
00:34:30.674 [2024-07-11 21:41:05.261533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.261559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.261670] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.261697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.261807] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.261834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.261990] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.262016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.262130] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.262155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.262251] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.262277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.262435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.262460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.262569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.262595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.262699] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.262729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.262858] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.262898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 
00:34:30.674 [2024-07-11 21:41:05.263024] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.263061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.263196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.263222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.263350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.263376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.263484] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.263510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.263652] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.263677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.263836] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.263863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.263977] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.264005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.264146] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.264172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.264309] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.264334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.264445] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.264473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 
00:34:30.674 [2024-07-11 21:41:05.264585] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.264621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.264763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.264799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.264921] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.264947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.265050] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.265076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.265183] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.265208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.265355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.265380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.265479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.265505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.265616] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.265642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.265775] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.265803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.265931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.265957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 
00:34:30.674 [2024-07-11 21:41:05.266062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.266089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.266189] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.266215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.266320] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.266345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.266451] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.266478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.266609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.266635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.266783] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.266834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.266970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.266998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.267120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.267146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.267246] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.267272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.267374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.267401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 
00:34:30.674 [2024-07-11 21:41:05.267533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.267559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.267669] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.674 [2024-07-11 21:41:05.267696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.674 qpair failed and we were unable to recover it. 00:34:30.674 [2024-07-11 21:41:05.267822] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.267848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.267987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.268013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.268156] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.268182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.268283] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.268309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.268441] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.268466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.268694] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.268734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.268863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.268902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.269035] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.269064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 
00:34:30.675 [2024-07-11 21:41:05.269169] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.269196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.269324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.269351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.269454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.269480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.269640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.269668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.269776] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.269803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.269906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.269932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.270069] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.270095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.270232] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.270257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.270362] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.270388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.270489] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.270515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 
00:34:30.675 [2024-07-11 21:41:05.270628] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.270653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.270789] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.270815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.273853] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.273895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.274042] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.274070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.274194] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.274221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.274331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.274358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.274468] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.274501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.274603] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.274628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.274761] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.274789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.274918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.274944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 
00:34:30.675 [2024-07-11 21:41:05.275049] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.275075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.275181] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.275207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.275314] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.275340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.275455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.275483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.275623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.275651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.275751] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.275797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.275930] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.275956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.276062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.276089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.276230] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.276257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.276388] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.276415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 
00:34:30.675 [2024-07-11 21:41:05.276523] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.276549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.276702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.276728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.276839] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.276865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.276957] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.276984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.277117] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.277143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.277249] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.277276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.277373] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.277400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.277505] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.277531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.277663] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.277689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.277834] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.277861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 
00:34:30.675 [2024-07-11 21:41:05.277974] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.278001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.278151] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.278178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.278284] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.278311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.278422] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.278448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.278577] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.278603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.278703] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.278730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.278855] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.278897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.279012] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.279040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.675 [2024-07-11 21:41:05.279176] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.675 [2024-07-11 21:41:05.279202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.675 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.279339] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.279366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 
00:34:30.676 [2024-07-11 21:41:05.279477] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.279503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.279598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.279624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.279727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.279762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.279905] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.279932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.280030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.280056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.280155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.280181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.280292] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.280318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.280455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.280482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.280587] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.280614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.280729] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.280761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 
00:34:30.676 [2024-07-11 21:41:05.280865] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.280892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.280992] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.281018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.281126] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.281152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.281259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.281285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.281448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.281475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.281612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.281643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.281774] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.281801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.281903] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.281929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.282031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.282057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.282158] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.282185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 
00:34:30.676 [2024-07-11 21:41:05.282334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.282362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.282497] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.282524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.282633] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.282660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.282795] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.282822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.282922] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.282948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.283047] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.283073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.283200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.283227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.283340] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.283367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.283503] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.283530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.283665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.283692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 
00:34:30.676 [2024-07-11 21:41:05.283826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.283853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.283987] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.284013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.284116] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.284143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.284279] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.284306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.284411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.284436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.284548] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.284575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.284708] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.284735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.284852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.284879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.285010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.285036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.285133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.285159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 
00:34:30.676 [2024-07-11 21:41:05.285297] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.285324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.285433] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.285461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.285620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.285662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.285804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.285834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.285943] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.285971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.286066] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.286093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.286226] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.286252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.286391] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.286417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.676 [2024-07-11 21:41:05.286524] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.676 [2024-07-11 21:41:05.286552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.676 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.286662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.286688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 
00:34:30.677 [2024-07-11 21:41:05.286793] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.286838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.286996] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.287022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.287128] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.287154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.287259] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.287285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.287417] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.287444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.287544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.287580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.287702] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.287729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.287870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.287898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.288032] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.288058] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.288171] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.288197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 
00:34:30.677 [2024-07-11 21:41:05.288327] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.288354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.288454] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.288480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.288589] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.288615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.288726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.288758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.288889] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.288914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.289022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.289047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.289147] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.289172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.289276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.289302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.289404] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.289430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.289533] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.289559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 
00:34:30.677 [2024-07-11 21:41:05.289686] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.289728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.289854] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.289886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.290052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.290080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.290196] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.290222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.290345] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.290371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.290474] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.290500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.290599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.290625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.290732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.290766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.290906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.290933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.291036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.291062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 
00:34:30.677 [2024-07-11 21:41:05.291199] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.291225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.291329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.291355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.291485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.291516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.291623] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.291648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.291787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.291814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.291942] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.291968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.292070] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.292098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.292201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.292227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.292363] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.292389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.292529] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.292554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 
00:34:30.677 [2024-07-11 21:41:05.292662] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.292687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.292790] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.292827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.292931] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.292957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.293071] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.293097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.293207] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.293234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.293350] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.293377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.293487] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.293515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.293620] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.293647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.293748] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.293788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 00:34:30.677 [2024-07-11 21:41:05.293902] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.677 [2024-07-11 21:41:05.293928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.677 qpair failed and we were unable to recover it. 
00:34:30.677 [2024-07-11 21:41:05.294025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.294051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.294179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.294206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.294313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.294340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.294440] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.294467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.294572] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.294599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.294766] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.294793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.294900] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.294927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.295036] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.295062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.295195] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.295221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.295332] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.295359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 
00:34:30.678 [2024-07-11 21:41:05.295457] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.295483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.295609] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.295635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.295740] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.295780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.295883] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.295909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.296045] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.296072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.296172] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.296198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.296330] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.296357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.296458] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.296484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.296591] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.296617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 00:34:30.678 [2024-07-11 21:41:05.296715] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.678 [2024-07-11 21:41:05.296741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.678 qpair failed and we were unable to recover it. 
00:34:30.678 [2024-07-11 21:41:05.296852] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.678 [2024-07-11 21:41:05.296879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.678 qpair failed and we were unable to recover it.
00:34:30.682 [... the same three-line failure repeats continuously from 21:41:05.296 through 21:41:05.327: every connect() to 10.0.0.2 port 4420 fails with errno = 111, the sock connection error cycles across tqpairs 0x7fb7a0000b90, 0x7fb7a8000b90, and 0x1c1ef20, and each qpair fails and cannot be recovered ...]
00:34:30.682 [2024-07-11 21:41:05.327490] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.327516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.327624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.327651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.327763] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.327792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.327896] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.327922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.328023] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.328049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.328155] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.328181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.328282] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.328308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.328412] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.328438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.328566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.328592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.328696] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.328721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 
00:34:30.682 [2024-07-11 21:41:05.328862] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.328902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.329010] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.329036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.329138] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.329164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.329296] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.329323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.329434] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.329461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.329597] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.329624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.329732] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.329765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.329874] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.329900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.330006] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.330031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.330133] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.330159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 
00:34:30.682 [2024-07-11 21:41:05.330274] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.330304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.330435] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.330460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.330592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.330618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.330720] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.330745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.330859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.330885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.331170] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.331195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.331291] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.331317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.331449] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.331474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.331608] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.331634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 00:34:30.682 [2024-07-11 21:41:05.331739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.682 [2024-07-11 21:41:05.331772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.682 qpair failed and we were unable to recover it. 
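For reference, errno = 111 on Linux is ECONNREFUSED: each TCP connection attempt to 10.0.0.2:4420 is actively refused, which typically means no NVMe/TCP listener is accepting on that address and port at the moment of the attempt. Below is a minimal, self-contained C sketch of the condition posix_sock_create keeps reporting; it is illustrative only, not SPDK code, and only the address and port are taken from the log.

```c
/* Minimal reproduction of the failure mode in the log: connect() to a
 * TCP endpoint with no listener fails with errno = 111 (ECONNREFUSED).
 * Illustrative sketch only -- not SPDK's posix_sock_create(). */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_in sa = { .sin_family = AF_INET };
    int fd = socket(AF_INET, SOCK_STREAM, 0);

    if (fd < 0) {
        perror("socket");
        return 1;
    }
    /* Address and port as seen in the log (assumed unreachable here). */
    sa.sin_port = htons(4420);
    inet_pton(AF_INET, "10.0.0.2", &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) < 0)
        /* With no NVMe/TCP listener on 10.0.0.2:4420 this prints:
         * "connect() failed, errno = 111 (Connection refused)" */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                errno, strerror(errno));

    close(fd);
    return 0;
}
```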
00:34:30.682 [2024-07-11 21:41:05.331875] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.682 [2024-07-11 21:41:05.331901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420
00:34:30.682 qpair failed and we were unable to recover it.
00:34:30.683 [... the triplet repeats 5 more times between 21:41:05.332029 and 21:41:05.332641, on tqpair=0x1c1ef20 and then 0x7fb7a0000b90 ...]
00:34:30.683 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:34:30.683 [... the triplet repeats once at 21:41:05.332775 on tqpair=0x7fb7a0000b90 ...]
00:34:30.683 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0
00:34:30.683 [... the triplet repeats 2 more times between 21:41:05.332945 and 21:41:05.333109 on tqpair=0x7fb7a0000b90 ...]
00:34:30.683 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:34:30.683 [... the triplet repeats 2 more times between 21:41:05.333219 and 21:41:05.333407, on tqpair=0x7fb7a0000b90 and 0x7fb7a8000b90 ...]
00:34:30.683 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable
00:34:30.683 [... the triplet repeats 2 more times between 21:41:05.333549 and 21:41:05.333711 on tqpair=0x1c1ef20 ...]
00:34:30.683 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:30.683 [... the triplet repeats 5 more times between 21:41:05.333815 and 21:41:05.334406, on tqpair=0x1c1ef20 and 0x7fb7a0000b90 ...]
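The interleaved common/autotest_common.sh xtrace lines above are the test harness polling while the host driver keeps re-dialing the target; each refused attempt yields one connect()/sock-connection-error/qpair-failed triplet until the transport gives up on the qpair. A hedged C sketch of that bounded retry-and-give-up pattern follows; the retry count, backoff, and helper name are illustrative assumptions, not SPDK's actual nvme_tcp logic.

```c
/* Sketch of the retry behavior implied by the repeated triplet: keep
 * re-dialing 10.0.0.2:4420, treat ECONNREFUSED (111) as "no listener
 * yet", back off, and give up after a bounded number of attempts.
 * Limits and backoff are assumptions, not SPDK's real parameters. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *addr, unsigned short port)
{
    struct sockaddr_in sa = { .sin_family = AF_INET,
                              .sin_port   = htons(port) };
    int fd, rc;

    inet_pton(AF_INET, addr, &sa.sin_addr);
    fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0)
        return -errno;
    rc = connect(fd, (struct sockaddr *)&sa, sizeof(sa)) ? -errno : 0;
    close(fd);
    return rc;
}

int main(void)
{
    for (int i = 0; i < 5; i++) {                /* bounded retries  */
        int rc = try_connect("10.0.0.2", 4420);
        if (rc == 0)
            return 0;                            /* listener is up   */
        fprintf(stderr, "connect() failed, errno = %d\n", -rc);
        if (rc != -ECONNREFUSED)
            break;                               /* hard error: stop */
        usleep(100000u * (i + 1));               /* simple backoff   */
    }
    /* Mirrors the log: every attempt was refused. */
    fprintf(stderr, "qpair failed and we were unable to recover it.\n");
    return 1;
}
```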
00:34:30.683 [2024-07-11 21:41:05.334543] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.683 [2024-07-11 21:41:05.334569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.683 qpair failed and we were unable to recover it.
00:34:30.686 [... the same triplet repeats 119 more times between 21:41:05.334674 and 21:41:05.351538, cycling across tqpair=0x7fb7a0000b90, 0x7fb7a8000b90, and 0x1c1ef20 ...]
00:34:30.686 [2024-07-11 21:41:05.351648] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.351675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.351809] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.351835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.351970] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.351997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.352109] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.352137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.352264] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.352290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.352446] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.352472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.352599] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.352625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.352769] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.352800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.352933] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.352960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.353062] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.353088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 
00:34:30.686 [2024-07-11 21:41:05.353200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.353228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.353396] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.353423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.353536] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.353563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.353665] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.353691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.353826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.353854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.686 qpair failed and we were unable to recover it. 00:34:30.686 [2024-07-11 21:41:05.353962] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.686 [2024-07-11 21:41:05.353989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.354121] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.354148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.354276] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.354303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.354411] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.354439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.354544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.354571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 
00:34:30.687 [2024-07-11 21:41:05.354675] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.354702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.354802] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.354830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.354935] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.354962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.355120] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.355147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.355247] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.355274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.355375] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.355402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.355550] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.355577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.355706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.355732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.355863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.355890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 00:34:30.687 [2024-07-11 21:41:05.356031] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.687 [2024-07-11 21:41:05.356057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.687 qpair failed and we were unable to recover it. 
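For reference: errno = 111 on Linux is ECONNREFUSED, i.e. nothing was accepting TCP connections at 10.0.0.2:4420 while the initiator kept retrying, which is what a target-disconnect test would be expected to produce. A quick way to decode the errno from a shell (assumes python3 is on the box; not part of the test suite):

  # Decode errno 111 into its symbolic name and message (Linux)
  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused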
00:34:30.687 [identical connect() failed / qpair failure messages for tqpair=0x7fb7a0000b90 continue, interleaved with the test's xtrace output below]
00:34:30.687 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:34:30.687 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:34:30.687 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:30.687 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:34:30.690 [connect() failed / qpair failure messages for tqpair=0x7fb7a0000b90 repeat roughly 100 more times, 21:41:05.356162 through 05.370431]
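The xtrace lines above show the test script's own steps amid the reconnect noise: it installs a cleanup trap (process_shm plus nvmftestfini on exit) and creates the target's backing device with rpc_cmd bdev_malloc_create 64 512 -b Malloc0, i.e. a 64 MB RAM-backed bdev with 512-byte blocks named Malloc0. Outside the suite, a minimal sketch of the same two steps, assuming a running SPDK target and the stock scripts/rpc.py; rpc_cmd, process_shm, and nvmftestfini are suite helpers and are stubbed here:

  # Cleanup on any exit, mirroring the trap set by nvmf/common.sh (helpers stubbed)
  trap 'echo "cleanup: the suite would run process_shm / nvmftestfini here"' SIGINT SIGTERM EXIT
  # Create a 64 MB malloc bdev with a 512-byte block size, named Malloc0
  ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0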
00:34:30.690 [2024-07-11 21:41:05.370566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.690 [2024-07-11 21:41:05.370607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420
00:34:30.690 qpair failed and we were unable to recover it.
00:34:30.690 [last three messages repeated 37 more times for tqpair=0x7fb798000b90, 21:41:05.370726 through 05.376132]
00:34:30.690 [2024-07-11 21:41:05.376234] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.690 [2024-07-11 21:41:05.376261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.690 qpair failed and we were unable to recover it. 00:34:30.690 [2024-07-11 21:41:05.376393] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.376419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.376567] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.376599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.376737] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.376769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.376895] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.376922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.377026] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.377053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.377160] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.377186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.377358] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.377384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.377518] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.377544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.377643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.377669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 
00:34:30.691 [2024-07-11 21:41:05.377800] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.377827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.377939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.377966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.378065] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.378101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.378236] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.378262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.378395] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.378422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.378557] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.378589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.378718] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.378746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.378870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.378896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.379009] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.379036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.379174] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.379201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 
00:34:30.691 [2024-07-11 21:41:05.379316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.379342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.379443] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.379469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.379575] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.379601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.379731] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.379791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.379939] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.379969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.380074] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.380100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.380203] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.380229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.380329] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.380355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.380486] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.380513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.380650] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.380677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 
00:34:30.691 [2024-07-11 21:41:05.380826] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.380854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.380963] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.380989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.381115] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.381141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.381241] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.381268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.381378] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.381404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.381544] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.381571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.381672] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.381698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.381819] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.381847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.381981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.382008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.382118] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.382144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 
00:34:30.691 [2024-07-11 21:41:05.382244] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.382270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.691 qpair failed and we were unable to recover it. 00:34:30.691 [2024-07-11 21:41:05.382398] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.691 [2024-07-11 21:41:05.382424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 00:34:30.692 [2024-07-11 21:41:05.382561] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.692 [2024-07-11 21:41:05.382587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 00:34:30.692 [2024-07-11 21:41:05.382692] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.692 [2024-07-11 21:41:05.382720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 00:34:30.692 [2024-07-11 21:41:05.382828] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.692 [2024-07-11 21:41:05.382855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 00:34:30.692 [2024-07-11 21:41:05.382959] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.692 [2024-07-11 21:41:05.382985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 00:34:30.692 [2024-07-11 21:41:05.383095] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.692 [2024-07-11 21:41:05.383122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 00:34:30.692 [2024-07-11 21:41:05.383248] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.692 [2024-07-11 21:41:05.383274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 00:34:30.692 [2024-07-11 21:41:05.383403] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.692 [2024-07-11 21:41:05.383429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 00:34:30.692 [2024-07-11 21:41:05.383534] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.692 [2024-07-11 21:41:05.383560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.692 qpair failed and we were unable to recover it. 
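[Note: errno = 111 is ECONNREFUSED on Linux, i.e. nothing is accepting TCP connections at 10.0.0.2:4420 yet; the initiator keeps retrying while the target side of nvmf_target_disconnect_tc2 is still being configured (the TCP transport is only initialized further down in this log). A minimal shell check for the same condition, assuming a BSD-style nc on the test host:

  nc -z -w 1 10.0.0.2 4420 || echo "connect refused or timed out (exit $?)"
]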
00:34:30.692 [2024-07-11 21:41:05.383664] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.692 [2024-07-11 21:41:05.383690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.692 qpair failed and we were unable to recover it.
[... four more identical sequences for tqpair=0x7fb7a0000b90 (21:41:05.383-21:41:05.384) ...]
00:34:30.692 Malloc0
[... three more identical sequences for tqpair=0x7fb7a0000b90 ...]
00:34:30.692 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:30.692 [2024-07-11 21:41:05.384739] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.692 [2024-07-11 21:41:05.384773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.692 qpair failed and we were unable to recover it.
00:34:30.692 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:34:30.692 [2024-07-11 21:41:05.384913] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.692 [2024-07-11 21:41:05.384939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.692 qpair failed and we were unable to recover it.
00:34:30.692 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
[... one identical sequence for tqpair=0x7fb7a0000b90 ...]
00:34:30.692 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... six identical sequences for tqpair=0x7fb7a0000b90 (21:41:05.385) ...]
00:34:30.692 [2024-07-11 21:41:05.386100] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.692 [2024-07-11 21:41:05.386141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:30.692 qpair failed and we were unable to recover it.
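[Note: rpc_cmd is the autotest harness wrapper (common/autotest_common.sh) around SPDK's JSON-RPC client. Issued directly, the same transport-creation call would look roughly like the sketch below; it assumes the target app is up and listening on the default RPC socket, and it reuses exactly the flags shown in the trace above:

  # create the TCP transport on the running nvmf target (same flags as the script passes)
  scripts/rpc.py nvmf_create_transport -t tcp -o
]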
00:34:30.692 [2024-07-11 21:41:05.386252] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.692 [2024-07-11 21:41:05.386280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:30.692 qpair failed and we were unable to recover it.
[... six more identical sequences for tqpair=0x7fb7a8000b90, then three for tqpair=0x7fb7a0000b90 (21:41:05.386-21:41:05.387) ...]
00:34:30.693 [2024-07-11 21:41:05.387658] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.693 [2024-07-11 21:41:05.387684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.693 qpair failed and we were unable to recover it.
[... one more identical sequence for tqpair=0x7fb7a0000b90 ...]
00:34:30.693 [2024-07-11 21:41:05.387986] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.693 [2024-07-11 21:41:05.388019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.693 [2024-07-11 21:41:05.388014] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:34:30.693 qpair failed and we were unable to recover it.
[... six more identical sequences for tqpair=0x7fb7a0000b90 (21:41:05.388) ...]
00:34:30.693 [2024-07-11 21:41:05.389011] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.693 [2024-07-11 21:41:05.389037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.693 qpair failed and we were unable to recover it.
[... the same sequence repeats continuously from 21:41:05.389 through 21:41:05.396, alternating between tqpair=0x7fb7a0000b90 and tqpair=0x7fb7a8000b90, all against addr=10.0.0.2, port=4420 ...]
00:34:30.694 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:34:30.694 [2024-07-11 21:41:05.396308] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.694 [2024-07-11 21:41:05.396334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420
00:34:30.694 qpair failed and we were unable to recover it.
00:34:30.694 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
[... one identical sequence for tqpair=0x7fb7a8000b90 ...]
00:34:30.694 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:34:30.694 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... six identical sequences for tqpair=0x7fb7a0000b90 (21:41:05.396-21:41:05.397) ...]
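[Note: after nvmf_create_subsystem, a target-disconnect test of this shape typically attaches the Malloc0 bdev created earlier as a namespace and adds a TCP listener matching the address the initiator is dialing. The exact follow-up RPCs are not visible in this excerpt; a hedged sketch using standard SPDK RPCs would be:

  # attach the malloc bdev as a namespace of cnode1 (illustrative)
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  # listen where the initiator is connecting (address/port taken from this log)
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
]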
00:34:30.694 [2024-07-11 21:41:05.397481] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.694 [2024-07-11 21:41:05.397507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.694 qpair failed and we were unable to recover it. 00:34:30.694 [2024-07-11 21:41:05.397613] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.694 [2024-07-11 21:41:05.397638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.694 qpair failed and we were unable to recover it. 00:34:30.694 [2024-07-11 21:41:05.397787] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.694 [2024-07-11 21:41:05.397815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.694 qpair failed and we were unable to recover it. 00:34:30.694 [2024-07-11 21:41:05.397920] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.694 [2024-07-11 21:41:05.397946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.694 qpair failed and we were unable to recover it. 00:34:30.694 [2024-07-11 21:41:05.398075] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.694 [2024-07-11 21:41:05.398101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.694 qpair failed and we were unable to recover it. 00:34:30.694 [2024-07-11 21:41:05.398202] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.694 [2024-07-11 21:41:05.398227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.694 qpair failed and we were unable to recover it. 00:34:30.694 [2024-07-11 21:41:05.398355] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.694 [2024-07-11 21:41:05.398381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.694 qpair failed and we were unable to recover it. 00:34:30.694 [2024-07-11 21:41:05.398506] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.398531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.398643] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.398669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.398804] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.398831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 
00:34:30.695 [2024-07-11 21:41:05.398929] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.398954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.399063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.399090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.399222] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.399248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.399356] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.399385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.399485] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.399513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.399640] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.399667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.399773] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.399811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.399915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.399943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.400101] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.400127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.400233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.400259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 
00:34:30.695 [2024-07-11 21:41:05.400359] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.400385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.400491] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.400517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.400646] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.400673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.400779] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.400805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.400909] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.400936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.401051] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.401077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.401204] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.401229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.401331] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.401358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.401462] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.401488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.401592] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.401619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 
00:34:30.695 [2024-07-11 21:41:05.401721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.401750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.401868] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.401894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.402030] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.402055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.402192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.402218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.402324] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.402350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.402447] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.402473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.402596] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.402622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.402730] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.402760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.402872] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.402898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.403001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.403027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 
00:34:30.695 [2024-07-11 21:41:05.403163] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.403204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.403316] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.403344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.403438] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.403465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.403566] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.403592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.403716] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.403742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.403890] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.403919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.695 [2024-07-11 21:41:05.404022] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.695 [2024-07-11 21:41:05.404048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.695 qpair failed and we were unable to recover it. 00:34:30.696 [2024-07-11 21:41:05.404180] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.404207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 00:34:30.696 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.696 [2024-07-11 21:41:05.404307] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.404333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 
00:34:30.696 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:34:30.696 [2024-07-11 21:41:05.404436] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.404462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 00:34:30.696 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.696 [2024-07-11 21:41:05.404569] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.404595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 00:34:30.696 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.696 [2024-07-11 21:41:05.404693] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.404722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 00:34:30.696 [2024-07-11 21:41:05.404843] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.404872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 00:34:30.696 [2024-07-11 21:41:05.404981] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.405007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 00:34:30.696 [2024-07-11 21:41:05.405108] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.405134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 00:34:30.696 [2024-07-11 21:41:05.405233] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.696 [2024-07-11 21:41:05.405258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.696 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.405381] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.405408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.405516] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.405541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 
00:34:30.957 [2024-07-11 21:41:05.405638] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.405664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.405781] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.405808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.405915] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.405944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.406063] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.406090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.406193] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.406219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.406322] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.406348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.406479] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.406505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.406637] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.406668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.406767] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.406794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.406923] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.406949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 
00:34:30.957 [2024-07-11 21:41:05.407054] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.407080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.407192] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.407218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.407323] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.957 [2024-07-11 21:41:05.407349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.957 qpair failed and we were unable to recover it. 00:34:30.957 [2024-07-11 21:41:05.407473] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.407499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.407604] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.407630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.407727] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.407764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.407870] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.407897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.408001] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.408027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.408127] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.408153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.408288] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.408314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 
00:34:30.958 [2024-07-11 21:41:05.408448] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.408473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.408573] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.408599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a8000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.408706] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.408733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.408859] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.408894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.409034] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.409062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.409201] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.409228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.409334] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.409362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.409461] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.409487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.409588] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.409614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.409726] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.409761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 
00:34:30.958 [2024-07-11 21:41:05.409863] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.409889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.410019] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.410045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.410188] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.410213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.410313] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.410339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.410455] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.410483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.410624] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.410650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.410784] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.410811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.410918] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.410944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.411052] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.411077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.411208] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.411237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 
00:34:30.958 [2024-07-11 21:41:05.411374] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.411401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.411509] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.411536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.411644] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.411670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.411778] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.411806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.411906] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.411933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.412068] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.412095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.412200] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.412227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c1ef20 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:34:30.958 [2024-07-11 21:41:05.412338] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.412370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:30.958 [2024-07-11 21:41:05.412476] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.412503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 
00:34:30.958 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:34:30.958 [2024-07-11 21:41:05.412612] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.412638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:30.958 [2024-07-11 21:41:05.412741] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.412779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.958 [2024-07-11 21:41:05.412917] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.958 [2024-07-11 21:41:05.412947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.958 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.413085] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.413116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.413262] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.413293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.413427] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.413463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.413598] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.413629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.413759] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.413791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.413925] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.413954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 
00:34:30.959 [2024-07-11 21:41:05.414107] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.414136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.414265] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.414296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.414416] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.414447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb798000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.414576] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.414614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.414721] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.414750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.414894] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.414921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.415029] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.415057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.415190] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.415217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.415317] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.415344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.415452] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:34:30.959 [2024-07-11 21:41:05.415479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420 00:34:30.959 qpair failed and we were unable to recover it. 
00:34:30.959 [2024-07-11 21:41:05.415583] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.959 [2024-07-11 21:41:05.415609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.959 qpair failed and we were unable to recover it.
00:34:30.959 [2024-07-11 21:41:05.415747] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.959 [2024-07-11 21:41:05.415788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.959 qpair failed and we were unable to recover it.
00:34:30.959 [2024-07-11 21:41:05.415899] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.959 [2024-07-11 21:41:05.415925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.959 qpair failed and we were unable to recover it.
00:34:30.959 [2024-07-11 21:41:05.416025] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.959 [2024-07-11 21:41:05.416051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.959 qpair failed and we were unable to recover it.
00:34:30.959 [2024-07-11 21:41:05.416179] posix.c:1038:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:34:30.959 [2024-07-11 21:41:05.416210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb7a0000b90 with addr=10.0.0.2, port=4420
00:34:30.959 qpair failed and we were unable to recover it.
00:34:30.959 [2024-07-11 21:41:05.416257] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:34:30.959 [2024-07-11 21:41:05.418802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-07-11 21:41:05.418942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-07-11 21:41:05.418970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-07-11 21:41:05.418986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-11 21:41:05.418999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
[2024-07-11 21:41:05.419036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
qpair failed and we were unable to recover it.
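The failure mode changes right here: once the *NOTICE* shows the target listening on 10.0.0.2 port 4420, connect() starts succeeding, but the Fabrics CONNECT for the I/O queue pair is rejected. The host-side pair sct 1, sc 130 decodes to status code type 0x1 (command specific) with status code 0x82, which for a Fabrics CONNECT command appears to be Connect Invalid Parameters (SPDK_NVMF_FABRIC_SC_INVALID_PARAM), consistent with the target's "Unknown controller ID 0x1" complaint about the qpair's CNTLID; rc -5 is -EIO and the later CQ transport error -6 is -ENXIO, exactly as the log spells out. A one-liner makes the hex mapping obvious:

  # decode the reported status pair: sct 1 / sc 130 -> 0x1 / 0x82
  printf 'sct=0x%x sc=0x%02x\n' 1 130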
00:34:30.959 21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
21:41:05 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 1063320
[2024-07-11 21:41:05.428642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
[2024-07-11 21:41:05.428786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-07-11 21:41:05.428813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-07-11 21:41:05.428828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-11 21:41:05.428841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
[2024-07-11 21:41:05.428871] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
qpair failed and we were unable to recover it.
00:34:30.959 [2024-07-11 21:41:05.438615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:30.959 [2024-07-11 21:41:05.438717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
[2024-07-11 21:41:05.438745] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
[2024-07-11 21:41:05.438767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
[2024-07-11 21:41:05.438781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
[2024-07-11 21:41:05.438812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
qpair failed and we were unable to recover it.
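Interleaved with the error spam above, the xtrace fragments show the target-side configuration this test case (tc2) drives before waiting on PID 1063320. Collected in order, the sequence is sketched below; calling scripts/rpc.py directly is an assumption for readability, since the harness actually issues these through the rpc_cmd wrapper from autotest_common.sh:

  # reconstructed from the host/target_disconnect.sh@22..@26 trace lines
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420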
00:34:30.959 [2024-07-11 21:41:05.448625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.959 [2024-07-11 21:41:05.448735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.959 [2024-07-11 21:41:05.448775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.959 [2024-07-11 21:41:05.448791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.959 [2024-07-11 21:41:05.448804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.959 [2024-07-11 21:41:05.448836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.458668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.959 [2024-07-11 21:41:05.458794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.959 [2024-07-11 21:41:05.458821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.959 [2024-07-11 21:41:05.458836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.959 [2024-07-11 21:41:05.458849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.959 [2024-07-11 21:41:05.458880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.959 qpair failed and we were unable to recover it. 00:34:30.959 [2024-07-11 21:41:05.468623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.959 [2024-07-11 21:41:05.468719] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.959 [2024-07-11 21:41:05.468746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.959 [2024-07-11 21:41:05.468769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.959 [2024-07-11 21:41:05.468783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.468814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 
00:34:30.960 [2024-07-11 21:41:05.478643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.478765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.478792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.478806] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.478819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.478849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.488648] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.488773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.488800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.488815] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.488828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.488864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.498636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.498744] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.498781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.498796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.498809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.498841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 
00:34:30.960 [2024-07-11 21:41:05.508674] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.508796] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.508823] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.508837] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.508851] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.508880] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.518710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.518826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.518853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.518867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.518880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.518909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.528795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.528907] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.528933] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.528948] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.528961] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.528993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 
00:34:30.960 [2024-07-11 21:41:05.538862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.538971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.539002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.539017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.539031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.539075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.548844] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.548942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.548969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.548983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.548996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.549026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.558864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.558987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.559013] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.559027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.559041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.559070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 
00:34:30.960 [2024-07-11 21:41:05.568876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.568985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.569012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.569026] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.569039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.569070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.578943] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.579049] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.579075] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.579090] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.579108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.579139] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.588963] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.589065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.589092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.589106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.589119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.589150] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 
00:34:30.960 [2024-07-11 21:41:05.599012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.599115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.599142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.599156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.599169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.599199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.609095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.960 [2024-07-11 21:41:05.609209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.960 [2024-07-11 21:41:05.609235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.960 [2024-07-11 21:41:05.609249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.960 [2024-07-11 21:41:05.609262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.960 [2024-07-11 21:41:05.609294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.960 qpair failed and we were unable to recover it. 00:34:30.960 [2024-07-11 21:41:05.619032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.619138] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.619164] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.619179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.619192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.619222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 
00:34:30.961 [2024-07-11 21:41:05.629082] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.629194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.629220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.629234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.629248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.629280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 00:34:30.961 [2024-07-11 21:41:05.639086] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.639192] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.639217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.639231] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.639244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.639275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 00:34:30.961 [2024-07-11 21:41:05.649114] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.649241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.649267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.649281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.649295] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.649326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 
00:34:30.961 [2024-07-11 21:41:05.659172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.659282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.659308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.659322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.659338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.659450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 00:34:30.961 [2024-07-11 21:41:05.669184] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.669288] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.669314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.669334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.669349] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.669380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 00:34:30.961 [2024-07-11 21:41:05.679219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.679318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.679344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.679358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.679372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.679401] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 
00:34:30.961 [2024-07-11 21:41:05.689210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.689319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.689345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.689359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.689372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.689403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 00:34:30.961 [2024-07-11 21:41:05.699287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.699404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.699432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.699446] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.699459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.699490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 00:34:30.961 [2024-07-11 21:41:05.709329] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.709466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.709493] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.709507] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.709520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.709549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 
00:34:30.961 [2024-07-11 21:41:05.719313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:30.961 [2024-07-11 21:41:05.719424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:30.961 [2024-07-11 21:41:05.719451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:30.961 [2024-07-11 21:41:05.719465] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:30.961 [2024-07-11 21:41:05.719478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:30.961 [2024-07-11 21:41:05.719509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:30.961 qpair failed and we were unable to recover it. 00:34:31.220 [2024-07-11 21:41:05.729309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.220 [2024-07-11 21:41:05.729417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.220 [2024-07-11 21:41:05.729444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.220 [2024-07-11 21:41:05.729458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.220 [2024-07-11 21:41:05.729472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.220 [2024-07-11 21:41:05.729503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.220 qpair failed and we were unable to recover it. 00:34:31.220 [2024-07-11 21:41:05.739451] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.220 [2024-07-11 21:41:05.739561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.220 [2024-07-11 21:41:05.739587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.220 [2024-07-11 21:41:05.739601] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.220 [2024-07-11 21:41:05.739614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.220 [2024-07-11 21:41:05.739644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.220 qpair failed and we were unable to recover it. 
00:34:31.220 [2024-07-11 21:41:05.749384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.220 [2024-07-11 21:41:05.749488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.220 [2024-07-11 21:41:05.749514] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.220 [2024-07-11 21:41:05.749529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.220 [2024-07-11 21:41:05.749542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.220 [2024-07-11 21:41:05.749572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.220 qpair failed and we were unable to recover it. 00:34:31.220 [2024-07-11 21:41:05.759426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.220 [2024-07-11 21:41:05.759553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.220 [2024-07-11 21:41:05.759579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.220 [2024-07-11 21:41:05.759605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.220 [2024-07-11 21:41:05.759619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.220 [2024-07-11 21:41:05.759649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.220 qpair failed and we were unable to recover it. 00:34:31.220 [2024-07-11 21:41:05.769535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.220 [2024-07-11 21:41:05.769666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.220 [2024-07-11 21:41:05.769692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.220 [2024-07-11 21:41:05.769706] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.220 [2024-07-11 21:41:05.769719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.220 [2024-07-11 21:41:05.769749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.220 qpair failed and we were unable to recover it. 
00:34:31.220 [2024-07-11 21:41:05.779464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.220 [2024-07-11 21:41:05.779608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.220 [2024-07-11 21:41:05.779634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.220 [2024-07-11 21:41:05.779648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.220 [2024-07-11 21:41:05.779661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.220 [2024-07-11 21:41:05.779690] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.220 qpair failed and we were unable to recover it. 00:34:31.220 [2024-07-11 21:41:05.789512] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.220 [2024-07-11 21:41:05.789615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.220 [2024-07-11 21:41:05.789641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.220 [2024-07-11 21:41:05.789655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.220 [2024-07-11 21:41:05.789667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.220 [2024-07-11 21:41:05.789698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.220 qpair failed and we were unable to recover it. 00:34:31.220 [2024-07-11 21:41:05.799625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.220 [2024-07-11 21:41:05.799727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.220 [2024-07-11 21:41:05.799759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.220 [2024-07-11 21:41:05.799776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.799791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.799821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 
00:34:31.221 [2024-07-11 21:41:05.809597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.809704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.809730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.809744] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.809766] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.809799] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.819661] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.819773] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.819799] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.819814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.819827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.819859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.829644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.829749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.829782] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.829797] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.829810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.829840] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 
00:34:31.221 [2024-07-11 21:41:05.839623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.839726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.839759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.839776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.839790] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.839821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.849684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.849813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.849845] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.849862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.849878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.849910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.859681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.859790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.859817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.859831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.859844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.859874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 
00:34:31.221 [2024-07-11 21:41:05.869705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.869843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.869869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.869884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.869897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.869928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.879858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.879994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.880020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.880036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.880050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.880080] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.889799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.889909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.889936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.889950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.889963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.889999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 
00:34:31.221 [2024-07-11 21:41:05.899815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.899922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.899948] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.899962] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.899975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.900006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.909804] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.909900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.909925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.909939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.909952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.909981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.919869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.919985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.920011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.920025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.920037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.920068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 
00:34:31.221 [2024-07-11 21:41:05.929898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.930001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.930027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.930041] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.930054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.930085] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.221 [2024-07-11 21:41:05.939893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.221 [2024-07-11 21:41:05.939997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.221 [2024-07-11 21:41:05.940030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.221 [2024-07-11 21:41:05.940045] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.221 [2024-07-11 21:41:05.940058] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.221 [2024-07-11 21:41:05.940087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.221 qpair failed and we were unable to recover it. 00:34:31.222 [2024-07-11 21:41:05.949959] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.222 [2024-07-11 21:41:05.950058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.222 [2024-07-11 21:41:05.950083] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.222 [2024-07-11 21:41:05.950097] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.222 [2024-07-11 21:41:05.950110] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.222 [2024-07-11 21:41:05.950141] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.222 qpair failed and we were unable to recover it. 
00:34:31.222 [2024-07-11 21:41:05.960013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.222 [2024-07-11 21:41:05.960139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.222 [2024-07-11 21:41:05.960165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.222 [2024-07-11 21:41:05.960179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.222 [2024-07-11 21:41:05.960192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.222 [2024-07-11 21:41:05.960221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.222 qpair failed and we were unable to recover it. 00:34:31.222 [2024-07-11 21:41:05.970013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.222 [2024-07-11 21:41:05.970121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.222 [2024-07-11 21:41:05.970147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.222 [2024-07-11 21:41:05.970161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.222 [2024-07-11 21:41:05.970175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.222 [2024-07-11 21:41:05.970219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.222 qpair failed and we were unable to recover it. 00:34:31.222 [2024-07-11 21:41:05.980035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.222 [2024-07-11 21:41:05.980140] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.222 [2024-07-11 21:41:05.980169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.222 [2024-07-11 21:41:05.980183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.222 [2024-07-11 21:41:05.980202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.222 [2024-07-11 21:41:05.980235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.222 qpair failed and we were unable to recover it. 
00:34:31.480 [2024-07-11 21:41:05.990053] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.480 [2024-07-11 21:41:05.990159] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.480 [2024-07-11 21:41:05.990186] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.480 [2024-07-11 21:41:05.990200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.480 [2024-07-11 21:41:05.990213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.480 [2024-07-11 21:41:05.990243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.480 qpair failed and we were unable to recover it. 00:34:31.480 [2024-07-11 21:41:06.000068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.480 [2024-07-11 21:41:06.000167] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.480 [2024-07-11 21:41:06.000194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.480 [2024-07-11 21:41:06.000207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.480 [2024-07-11 21:41:06.000219] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.480 [2024-07-11 21:41:06.000248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.480 qpair failed and we were unable to recover it. 00:34:31.480 [2024-07-11 21:41:06.010107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.480 [2024-07-11 21:41:06.010232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.480 [2024-07-11 21:41:06.010259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.010273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.010286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.010315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 
00:34:31.481 [2024-07-11 21:41:06.020170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.020276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.020302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.020316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.020329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.020374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 00:34:31.481 [2024-07-11 21:41:06.030145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.030291] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.030318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.030332] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.030345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.030375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 00:34:31.481 [2024-07-11 21:41:06.040203] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.040344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.040369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.040383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.040396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.040427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 
00:34:31.481 [2024-07-11 21:41:06.050247] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.050372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.050397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.050411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.050423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.050452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 00:34:31.481 [2024-07-11 21:41:06.060270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.060367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.060393] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.060407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.060420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.060450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 00:34:31.481 [2024-07-11 21:41:06.070293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.070396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.070421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.070435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.070454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.070487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 
00:34:31.481 [2024-07-11 21:41:06.080271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.080369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.080394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.080407] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.080420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.080450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 00:34:31.481 [2024-07-11 21:41:06.090410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.090530] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.090557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.090571] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.090584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.090616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 00:34:31.481 [2024-07-11 21:41:06.100340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:31.481 [2024-07-11 21:41:06.100445] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:31.481 [2024-07-11 21:41:06.100471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:31.481 [2024-07-11 21:41:06.100486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:31.481 [2024-07-11 21:41:06.100499] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:31.481 [2024-07-11 21:41:06.100529] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:31.481 qpair failed and we were unable to recover it. 
00:34:31.481 [2024-07-11 21:41:06.110374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.481 [2024-07-11 21:41:06.110480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.481 [2024-07-11 21:41:06.110506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.481 [2024-07-11 21:41:06.110520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.481 [2024-07-11 21:41:06.110533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.481 [2024-07-11 21:41:06.110563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.481 qpair failed and we were unable to recover it.
00:34:31.481 [2024-07-11 21:41:06.120396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.120494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.120520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.120534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.120546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.120576] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.130461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.130565] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.130590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.130605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.130618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.130647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.140545] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.140660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.140686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.140699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.140713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.140743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.150474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.150575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.150601] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.150615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.150629] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.150659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.160556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.160659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.160685] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.160705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.160719] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.160749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.170572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.170692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.170718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.170732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.170745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.170787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.180596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.180703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.180728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.180742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.180763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.180795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.190651] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.190774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.190801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.190816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.190832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.190862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.200626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.200748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.200781] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.200796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.200808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.200839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.210660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.210770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.210796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.210810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.210823] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.210855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.220847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.482 [2024-07-11 21:41:06.220964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.482 [2024-07-11 21:41:06.220990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.482 [2024-07-11 21:41:06.221004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.482 [2024-07-11 21:41:06.221017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.482 [2024-07-11 21:41:06.221048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.482 qpair failed and we were unable to recover it.
00:34:31.482 [2024-07-11 21:41:06.230799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.483 [2024-07-11 21:41:06.230913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.483 [2024-07-11 21:41:06.230939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.483 [2024-07-11 21:41:06.230953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.483 [2024-07-11 21:41:06.230966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.483 [2024-07-11 21:41:06.230997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.483 qpair failed and we were unable to recover it.
00:34:31.483 [2024-07-11 21:41:06.240797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.483 [2024-07-11 21:41:06.240902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.483 [2024-07-11 21:41:06.240928] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.483 [2024-07-11 21:41:06.240941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.483 [2024-07-11 21:41:06.240955] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.483 [2024-07-11 21:41:06.240985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.483 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.250815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.250929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.250960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.250976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.250989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.251020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.260809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.260914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.260939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.260954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.260967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.260998] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.270825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.270925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.270951] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.270965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.270978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.271009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.280869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.280971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.280997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.281010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.281023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.281054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.290965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.291075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.291101] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.291115] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.291128] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.291163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.300933] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.301063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.301088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.301102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.301115] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.301147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.310980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.311083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.311108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.311122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.311135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.311166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.320990] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.321096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.321122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.321136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.321149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.321180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.331022] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.331146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.331173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.331187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.331200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.331231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.341042] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.341152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.341182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.341197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.341211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.341243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.351155] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.351269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.351295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.351308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.351321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.351351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.361088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.361193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.361219] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.361234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.742 [2024-07-11 21:41:06.361248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.742 [2024-07-11 21:41:06.361288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.742 qpair failed and we were unable to recover it.
00:34:31.742 [2024-07-11 21:41:06.371148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.742 [2024-07-11 21:41:06.371301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.742 [2024-07-11 21:41:06.371327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.742 [2024-07-11 21:41:06.371341] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.371354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.371384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.381152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.381282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.381307] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.381322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.381335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.381370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.391162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.391266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.391291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.391305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.391318] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.391348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.401294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.401431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.401456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.401471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.401484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.401514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.411293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.411405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.411430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.411444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.411456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.411487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.421236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.421347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.421373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.421386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.421399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.421429] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.431306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.431428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.431454] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.431468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.431481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.431512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.441339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.441443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.441469] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.441483] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.441495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.441538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.451375] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.451485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.451510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.451525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.451538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.451568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.461374] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.461503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.461529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.461543] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.461556] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.461585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.471372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.471476] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.471502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.471516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.471535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.471566] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.481498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.481595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.481620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.481634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.481648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.481677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.491493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.491648] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.491674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.491689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.491702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.491744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:31.743 [2024-07-11 21:41:06.501458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:31.743 [2024-07-11 21:41:06.501573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:31.743 [2024-07-11 21:41:06.501599] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:31.743 [2024-07-11 21:41:06.501613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:31.743 [2024-07-11 21:41:06.501626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:31.743 [2024-07-11 21:41:06.501656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:31.743 qpair failed and we were unable to recover it.
00:34:32.002 [2024-07-11 21:41:06.511596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.002 [2024-07-11 21:41:06.511711] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.002 [2024-07-11 21:41:06.511738] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.002 [2024-07-11 21:41:06.511761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.002 [2024-07-11 21:41:06.511778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.002 [2024-07-11 21:41:06.511811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.002 qpair failed and we were unable to recover it.
00:34:32.002 [2024-07-11 21:41:06.521555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.002 [2024-07-11 21:41:06.521704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.002 [2024-07-11 21:41:06.521731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.002 [2024-07-11 21:41:06.521745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.002 [2024-07-11 21:41:06.521769] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.002 [2024-07-11 21:41:06.521814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.002 qpair failed and we were unable to recover it.
00:34:32.002 [2024-07-11 21:41:06.531570] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.002 [2024-07-11 21:41:06.531684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.002 [2024-07-11 21:41:06.531710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.002 [2024-07-11 21:41:06.531724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.002 [2024-07-11 21:41:06.531737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.002 [2024-07-11 21:41:06.531777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.002 qpair failed and we were unable to recover it.
00:34:32.002 [2024-07-11 21:41:06.541583] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.002 [2024-07-11 21:41:06.541686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.002 [2024-07-11 21:41:06.541712] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.002 [2024-07-11 21:41:06.541726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.002 [2024-07-11 21:41:06.541739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.002 [2024-07-11 21:41:06.541777] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.002 qpair failed and we were unable to recover it.
00:34:32.002 [2024-07-11 21:41:06.551617] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.002 [2024-07-11 21:41:06.551717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.002 [2024-07-11 21:41:06.551743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.002 [2024-07-11 21:41:06.551766] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.002 [2024-07-11 21:41:06.551781] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.002 [2024-07-11 21:41:06.551810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.002 qpair failed and we were unable to recover it.
00:34:32.002 [2024-07-11 21:41:06.561638] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.002 [2024-07-11 21:41:06.561766] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.002 [2024-07-11 21:41:06.561792] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.002 [2024-07-11 21:41:06.561812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.002 [2024-07-11 21:41:06.561826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.002 [2024-07-11 21:41:06.561858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.002 qpair failed and we were unable to recover it.
00:34:32.002 [2024-07-11 21:41:06.571673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.002 [2024-07-11 21:41:06.571794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.002 [2024-07-11 21:41:06.571820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.002 [2024-07-11 21:41:06.571834] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.571847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.571878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.581702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.581824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.581850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.581864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.581877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.581909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.591742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.591855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.591880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.591894] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.591908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.591937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.601780] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.601882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.601909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.601923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.601936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.601979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.611784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.611890] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.611916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.611930] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.611943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.611974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.621838] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.621951] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.621977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.621991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.622004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.622035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.631853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.631953] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.631978] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.631992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.632005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.632035] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.641960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.642089] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.642114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.642128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.642141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.642171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.651909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.652017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.652047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.652062] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.652075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.652105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.003 [2024-07-11 21:41:06.661931] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.003 [2024-07-11 21:41:06.662047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.003 [2024-07-11 21:41:06.662073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.003 [2024-07-11 21:41:06.662087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.003 [2024-07-11 21:41:06.662100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.003 [2024-07-11 21:41:06.662131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.003 qpair failed and we were unable to recover it.
00:34:32.004 [2024-07-11 21:41:06.671935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.004 [2024-07-11 21:41:06.672077] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.004 [2024-07-11 21:41:06.672103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.004 [2024-07-11 21:41:06.672117] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.004 [2024-07-11 21:41:06.672129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.004 [2024-07-11 21:41:06.672159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.004 qpair failed and we were unable to recover it.
00:34:32.004 [2024-07-11 21:41:06.682024] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.682144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.004 [2024-07-11 21:41:06.682170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.004 [2024-07-11 21:41:06.682184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.004 [2024-07-11 21:41:06.682197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.004 [2024-07-11 21:41:06.682227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.004 qpair failed and we were unable to recover it. 00:34:32.004 [2024-07-11 21:41:06.692020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.692146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.004 [2024-07-11 21:41:06.692171] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.004 [2024-07-11 21:41:06.692186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.004 [2024-07-11 21:41:06.692199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.004 [2024-07-11 21:41:06.692239] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.004 qpair failed and we were unable to recover it. 00:34:32.004 [2024-07-11 21:41:06.702026] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.702130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.004 [2024-07-11 21:41:06.702155] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.004 [2024-07-11 21:41:06.702169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.004 [2024-07-11 21:41:06.702183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.004 [2024-07-11 21:41:06.702212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.004 qpair failed and we were unable to recover it. 
00:34:32.004 [2024-07-11 21:41:06.712089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.712190] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.004 [2024-07-11 21:41:06.712215] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.004 [2024-07-11 21:41:06.712230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.004 [2024-07-11 21:41:06.712243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.004 [2024-07-11 21:41:06.712273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.004 qpair failed and we were unable to recover it. 00:34:32.004 [2024-07-11 21:41:06.722088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.722189] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.004 [2024-07-11 21:41:06.722214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.004 [2024-07-11 21:41:06.722228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.004 [2024-07-11 21:41:06.722241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.004 [2024-07-11 21:41:06.722272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.004 qpair failed and we were unable to recover it. 00:34:32.004 [2024-07-11 21:41:06.732174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.732329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.004 [2024-07-11 21:41:06.732355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.004 [2024-07-11 21:41:06.732369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.004 [2024-07-11 21:41:06.732382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.004 [2024-07-11 21:41:06.732411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.004 qpair failed and we were unable to recover it. 
00:34:32.004 [2024-07-11 21:41:06.742225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.742329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.004 [2024-07-11 21:41:06.742359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.004 [2024-07-11 21:41:06.742373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.004 [2024-07-11 21:41:06.742386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.004 [2024-07-11 21:41:06.742416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.004 qpair failed and we were unable to recover it. 00:34:32.004 [2024-07-11 21:41:06.752237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.752346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.004 [2024-07-11 21:41:06.752371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.004 [2024-07-11 21:41:06.752384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.004 [2024-07-11 21:41:06.752398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.004 [2024-07-11 21:41:06.752428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.004 qpair failed and we were unable to recover it. 00:34:32.004 [2024-07-11 21:41:06.762239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.004 [2024-07-11 21:41:06.762358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.005 [2024-07-11 21:41:06.762384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.005 [2024-07-11 21:41:06.762398] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.005 [2024-07-11 21:41:06.762411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.005 [2024-07-11 21:41:06.762441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.005 qpair failed and we were unable to recover it. 
00:34:32.264 [2024-07-11 21:41:06.772345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.264 [2024-07-11 21:41:06.772452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.264 [2024-07-11 21:41:06.772478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.264 [2024-07-11 21:41:06.772492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.264 [2024-07-11 21:41:06.772505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.264 [2024-07-11 21:41:06.772536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.264 qpair failed and we were unable to recover it. 00:34:32.264 [2024-07-11 21:41:06.782293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.264 [2024-07-11 21:41:06.782404] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.264 [2024-07-11 21:41:06.782430] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.264 [2024-07-11 21:41:06.782444] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.264 [2024-07-11 21:41:06.782457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.264 [2024-07-11 21:41:06.782494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.264 qpair failed and we were unable to recover it. 00:34:32.264 [2024-07-11 21:41:06.792283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.792387] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.792412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.792427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.792439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.792469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 
00:34:32.265 [2024-07-11 21:41:06.802305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.802403] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.802429] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.802443] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.802456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.802486] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 00:34:32.265 [2024-07-11 21:41:06.812446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.812552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.812578] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.812591] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.812604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.812635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 00:34:32.265 [2024-07-11 21:41:06.822432] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.822547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.822574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.822588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.822601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.822630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 
00:34:32.265 [2024-07-11 21:41:06.832395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.832498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.832524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.832538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.832551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.832582] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 00:34:32.265 [2024-07-11 21:41:06.842439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.842534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.842560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.842575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.842587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.842618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 00:34:32.265 [2024-07-11 21:41:06.852498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.852608] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.852635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.852649] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.852662] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.852692] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 
00:34:32.265 [2024-07-11 21:41:06.862532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.862638] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.862667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.862682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.862694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.862725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 00:34:32.265 [2024-07-11 21:41:06.872505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.872610] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.872636] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.872650] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.872669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.872699] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 00:34:32.265 [2024-07-11 21:41:06.882597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.882701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.882728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.882742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.882763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.882808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 
00:34:32.265 [2024-07-11 21:41:06.892672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.892795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.892829] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.892844] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.265 [2024-07-11 21:41:06.892857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.265 [2024-07-11 21:41:06.892890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.265 qpair failed and we were unable to recover it. 00:34:32.265 [2024-07-11 21:41:06.902593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.265 [2024-07-11 21:41:06.902694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.265 [2024-07-11 21:41:06.902720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.265 [2024-07-11 21:41:06.902734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.902749] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.902787] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:06.912620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.912722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.912747] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.912769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.912783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.912813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 
00:34:32.266 [2024-07-11 21:41:06.922679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.922807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.922833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.922848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.922861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.922892] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:06.932797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.932920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.932946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.932960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.932973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.933004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:06.942714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.942825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.942850] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.942864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.942878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.942909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 
00:34:32.266 [2024-07-11 21:41:06.952801] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.952904] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.952930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.952944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.952958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.952987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:06.962782] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.962878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.962904] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.962925] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.962938] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.962968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:06.972811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.972912] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.972937] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.972951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.972963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.972994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 
00:34:32.266 [2024-07-11 21:41:06.982866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.983012] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.983038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.983052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.983065] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.983095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:06.992865] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:06.992964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:06.992990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:06.993004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:06.993017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:06.993047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:07.002986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:07.003086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:07.003112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:07.003127] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:07.003139] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:07.003182] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 
00:34:32.266 [2024-07-11 21:41:07.013052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:07.013209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:07.013236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:07.013251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:07.013267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:07.013297] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:07.022975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:07.023094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.266 [2024-07-11 21:41:07.023121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.266 [2024-07-11 21:41:07.023136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.266 [2024-07-11 21:41:07.023152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.266 [2024-07-11 21:41:07.023184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.266 qpair failed and we were unable to recover it. 00:34:32.266 [2024-07-11 21:41:07.033061] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.266 [2024-07-11 21:41:07.033160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.267 [2024-07-11 21:41:07.033187] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.267 [2024-07-11 21:41:07.033201] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.267 [2024-07-11 21:41:07.033214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.267 [2024-07-11 21:41:07.033257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.267 qpair failed and we were unable to recover it. 
00:34:32.527 [2024-07-11 21:41:07.043036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.527 [2024-07-11 21:41:07.043153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.527 [2024-07-11 21:41:07.043179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.527 [2024-07-11 21:41:07.043193] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.527 [2024-07-11 21:41:07.043206] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.527 [2024-07-11 21:41:07.043237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.527 qpair failed and we were unable to recover it. 00:34:32.527 [2024-07-11 21:41:07.053084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.527 [2024-07-11 21:41:07.053194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.527 [2024-07-11 21:41:07.053218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.527 [2024-07-11 21:41:07.053237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.527 [2024-07-11 21:41:07.053250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.053279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.063065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.063169] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.063194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.063208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.063221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.063251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 
00:34:32.528 [2024-07-11 21:41:07.073100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.073211] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.073237] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.073251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.073264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.073308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.083198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.083298] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.083324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.083340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.083353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.083383] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.093189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.093359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.093386] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.093402] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.093417] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.093449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 
00:34:32.528 [2024-07-11 21:41:07.103182] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.103284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.103310] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.103325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.103338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.103370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.113200] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.113301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.113328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.113342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.113355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.113385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.123282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.123386] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.123412] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.123426] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.123439] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.123470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 
00:34:32.528 [2024-07-11 21:41:07.133275] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.133385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.133411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.133424] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.133438] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.133469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.143303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.143405] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.143436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.143452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.143464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.143494] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.153315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.153417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.153444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.153458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.153470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.153500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 
00:34:32.528 [2024-07-11 21:41:07.163395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.163517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.163543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.163557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.163570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.163601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.173457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.173561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.528 [2024-07-11 21:41:07.173588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.528 [2024-07-11 21:41:07.173602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.528 [2024-07-11 21:41:07.173616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.528 [2024-07-11 21:41:07.173660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.528 qpair failed and we were unable to recover it. 00:34:32.528 [2024-07-11 21:41:07.183402] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:32.528 [2024-07-11 21:41:07.183508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:32.529 [2024-07-11 21:41:07.183534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:32.529 [2024-07-11 21:41:07.183548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:32.529 [2024-07-11 21:41:07.183561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:32.529 [2024-07-11 21:41:07.183598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:32.529 qpair failed and we were unable to recover it. 
00:34:32.529 [2024-07-11 21:41:07.193522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.193624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.193651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.193665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.193677] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.193708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.203526] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.203636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.203662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.203676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.203688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.203719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.213510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.213637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.213662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.213676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.213689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.213719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.223582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.223692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.223719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.223736] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.223751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.223795] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.233560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.233661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.233693] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.233708] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.233721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.233772] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.243652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.243763] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.243789] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.243804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.243817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.243847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.253625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.253739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.253772] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.253787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.253801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.253831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.263653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.263782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.263807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.263821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.263834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.263864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.273676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.273782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.273807] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.273821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.273839] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.273870] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.283704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.529 [2024-07-11 21:41:07.283867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.529 [2024-07-11 21:41:07.283893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.529 [2024-07-11 21:41:07.283908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.529 [2024-07-11 21:41:07.283920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.529 [2024-07-11 21:41:07.283949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.529 qpair failed and we were unable to recover it.
00:34:32.529 [2024-07-11 21:41:07.293716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.530 [2024-07-11 21:41:07.293861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.530 [2024-07-11 21:41:07.293887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.530 [2024-07-11 21:41:07.293901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.530 [2024-07-11 21:41:07.293914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.530 [2024-07-11 21:41:07.293944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.530 qpair failed and we were unable to recover it.
00:34:32.789 [2024-07-11 21:41:07.303742] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.789 [2024-07-11 21:41:07.303856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.789 [2024-07-11 21:41:07.303882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.789 [2024-07-11 21:41:07.303896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.789 [2024-07-11 21:41:07.303909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.789 [2024-07-11 21:41:07.303939] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.789 qpair failed and we were unable to recover it.
00:34:32.789 [2024-07-11 21:41:07.313788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.789 [2024-07-11 21:41:07.313894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.789 [2024-07-11 21:41:07.313920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.789 [2024-07-11 21:41:07.313935] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.789 [2024-07-11 21:41:07.313949] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.789 [2024-07-11 21:41:07.313979] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.789 qpair failed and we were unable to recover it.
00:34:32.789 [2024-07-11 21:41:07.323824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.789 [2024-07-11 21:41:07.323938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.789 [2024-07-11 21:41:07.323964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.789 [2024-07-11 21:41:07.323978] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.789 [2024-07-11 21:41:07.323991] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.789 [2024-07-11 21:41:07.324023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.789 qpair failed and we were unable to recover it.
00:34:32.789 [2024-07-11 21:41:07.333853] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.789 [2024-07-11 21:41:07.333960] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.789 [2024-07-11 21:41:07.333987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.789 [2024-07-11 21:41:07.334001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.789 [2024-07-11 21:41:07.334014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.789 [2024-07-11 21:41:07.334045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.789 qpair failed and we were unable to recover it.
00:34:32.789 [2024-07-11 21:41:07.343869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.789 [2024-07-11 21:41:07.343988] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.789 [2024-07-11 21:41:07.344014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.789 [2024-07-11 21:41:07.344029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.789 [2024-07-11 21:41:07.344041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.789 [2024-07-11 21:41:07.344087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.789 qpair failed and we were unable to recover it.
00:34:32.789 [2024-07-11 21:41:07.353940] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.354062] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.354087] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.354102] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.354114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.354145] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.363920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.364019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.364045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.364065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.364080] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.364110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.373957] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.374069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.374094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.374108] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.374121] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.374151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.384122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.384257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.384283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.384297] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.384309] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.384339] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.394027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.394147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.394172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.394186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.394198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.394229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.404041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.404176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.404202] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.404216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.404229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.404259] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.414112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.414238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.414264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.414278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.414292] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.414323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.424106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.424212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.424238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.424251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.424263] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.424294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.434144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.434246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.434271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.434286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.434298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.434330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.444145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.444250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.444275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.444289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.444302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.444332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.454232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.454339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.454365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.454385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.454399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.454442] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.464213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.464315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.464342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.464356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.464368] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.464398] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.474238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.474358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.474395] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.474409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.474422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.474452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.484262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.484362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.484389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.790 [2024-07-11 21:41:07.484403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.790 [2024-07-11 21:41:07.484416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.790 [2024-07-11 21:41:07.484446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.790 qpair failed and we were unable to recover it.
00:34:32.790 [2024-07-11 21:41:07.494358] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.790 [2024-07-11 21:41:07.494470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.790 [2024-07-11 21:41:07.494496] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.791 [2024-07-11 21:41:07.494510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.791 [2024-07-11 21:41:07.494523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.791 [2024-07-11 21:41:07.494569] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.791 qpair failed and we were unable to recover it.
00:34:32.791 [2024-07-11 21:41:07.504347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.791 [2024-07-11 21:41:07.504502] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.791 [2024-07-11 21:41:07.504530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.791 [2024-07-11 21:41:07.504544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.791 [2024-07-11 21:41:07.504557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.791 [2024-07-11 21:41:07.504590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.791 qpair failed and we were unable to recover it.
00:34:32.791 [2024-07-11 21:41:07.514416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.791 [2024-07-11 21:41:07.514524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.791 [2024-07-11 21:41:07.514552] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.791 [2024-07-11 21:41:07.514566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.791 [2024-07-11 21:41:07.514579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.791 [2024-07-11 21:41:07.514610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.791 qpair failed and we were unable to recover it.
00:34:32.791 [2024-07-11 21:41:07.524399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.791 [2024-07-11 21:41:07.524501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.791 [2024-07-11 21:41:07.524526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.791 [2024-07-11 21:41:07.524540] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.791 [2024-07-11 21:41:07.524554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.791 [2024-07-11 21:41:07.524596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.791 qpair failed and we were unable to recover it.
00:34:32.791 [2024-07-11 21:41:07.534410] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.791 [2024-07-11 21:41:07.534517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.791 [2024-07-11 21:41:07.534543] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.791 [2024-07-11 21:41:07.534557] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.791 [2024-07-11 21:41:07.534569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.791 [2024-07-11 21:41:07.534599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.791 qpair failed and we were unable to recover it.
00:34:32.791 [2024-07-11 21:41:07.544524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.791 [2024-07-11 21:41:07.544641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.791 [2024-07-11 21:41:07.544673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.791 [2024-07-11 21:41:07.544689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.791 [2024-07-11 21:41:07.544702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.791 [2024-07-11 21:41:07.544732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.791 qpair failed and we were unable to recover it.
00:34:32.791 [2024-07-11 21:41:07.554584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:32.791 [2024-07-11 21:41:07.554720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:32.791 [2024-07-11 21:41:07.554746] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:32.791 [2024-07-11 21:41:07.554769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:32.791 [2024-07-11 21:41:07.554783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:32.791 [2024-07-11 21:41:07.554813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:32.791 qpair failed and we were unable to recover it.
00:34:33.050 [2024-07-11 21:41:07.564555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.050 [2024-07-11 21:41:07.564658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.050 [2024-07-11 21:41:07.564684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.050 [2024-07-11 21:41:07.564698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.050 [2024-07-11 21:41:07.564710] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.050 [2024-07-11 21:41:07.564740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.050 qpair failed and we were unable to recover it.
00:34:33.050 [2024-07-11 21:41:07.574544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.050 [2024-07-11 21:41:07.574654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.050 [2024-07-11 21:41:07.574680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.050 [2024-07-11 21:41:07.574694] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.050 [2024-07-11 21:41:07.574707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.050 [2024-07-11 21:41:07.574738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.050 qpair failed and we were unable to recover it.
00:34:33.050 [2024-07-11 21:41:07.584652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.584761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.584787] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.584801] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.584814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.584855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.594590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.594694] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.594719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.594733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.594746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.594785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.604606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.604703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.604728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.604743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.604763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.604798] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.614689] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.614808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.614835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.614850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.614866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.614896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.624687] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.624807] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.624833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.624848] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.624862] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.624893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.634789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.634901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.634932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.634947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.634960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.634990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.644722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.644837] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.644863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.644877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.644890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.644920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.654793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.654906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.654932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.654946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.654959] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.655002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.664813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.664923] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.664950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.664964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.664977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.665006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.674817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.674915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.674941] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.674955] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.674974] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.675005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.684849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.684956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.684982] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.684996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.685009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.685040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.694924] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.695029] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.695054] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.695068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.695082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.695124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.704917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.705019] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.705045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.705059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.705072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.705102] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.714980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.715079] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.051 [2024-07-11 21:41:07.715105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.051 [2024-07-11 21:41:07.715119] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.051 [2024-07-11 21:41:07.715132] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.051 [2024-07-11 21:41:07.715162] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.051 qpair failed and we were unable to recover it.
00:34:33.051 [2024-07-11 21:41:07.724960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.051 [2024-07-11 21:41:07.725065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.725091] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.725105] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.725117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.725148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.735018] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.735130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.735156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.735169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.735183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.735213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.745041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.745168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.745197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.745212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.745225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.745257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.755041] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.755143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.755169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.755184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.755197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.755227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.765071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.765200] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.765225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.765239] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.765257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.765288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.775113] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.775219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.775244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.775258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.775271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.775300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.785162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.785308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.785334] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.785348] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.785360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.785391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.795212] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.795318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.795344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.795358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.795372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.795403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.805232] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.805336] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.805362] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.805376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.805388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.805419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.052 [2024-07-11 21:41:07.815283] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.052 [2024-07-11 21:41:07.815423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.052 [2024-07-11 21:41:07.815452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.052 [2024-07-11 21:41:07.815467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.052 [2024-07-11 21:41:07.815480] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.052 [2024-07-11 21:41:07.815510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.052 qpair failed and we were unable to recover it.
00:34:33.311 [2024-07-11 21:41:07.825245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.311 [2024-07-11 21:41:07.825350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.311 [2024-07-11 21:41:07.825376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.311 [2024-07-11 21:41:07.825391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.311 [2024-07-11 21:41:07.825404] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.311 [2024-07-11 21:41:07.825436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.311 qpair failed and we were unable to recover it.
00:34:33.311 [2024-07-11 21:41:07.835310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.311 [2024-07-11 21:41:07.835417] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.311 [2024-07-11 21:41:07.835443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.311 [2024-07-11 21:41:07.835457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.311 [2024-07-11 21:41:07.835470] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.311 [2024-07-11 21:41:07.835500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.311 qpair failed and we were unable to recover it.
00:34:33.311 [2024-07-11 21:41:07.845298] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.311 [2024-07-11 21:41:07.845425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.311 [2024-07-11 21:41:07.845452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.311 [2024-07-11 21:41:07.845466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.311 [2024-07-11 21:41:07.845477] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.311 [2024-07-11 21:41:07.845508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.311 qpair failed and we were unable to recover it.
00:34:33.311 [2024-07-11 21:41:07.855369] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.311 [2024-07-11 21:41:07.855485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.311 [2024-07-11 21:41:07.855512] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.311 [2024-07-11 21:41:07.855536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.311 [2024-07-11 21:41:07.855551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.311 [2024-07-11 21:41:07.855583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.311 qpair failed and we were unable to recover it.
00:34:33.311 [2024-07-11 21:41:07.865361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.311 [2024-07-11 21:41:07.865481] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.311 [2024-07-11 21:41:07.865507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.311 [2024-07-11 21:41:07.865521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.311 [2024-07-11 21:41:07.865534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.311 [2024-07-11 21:41:07.865564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.311 qpair failed and we were unable to recover it.
00:34:33.311 [2024-07-11 21:41:07.875379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:33.311 [2024-07-11 21:41:07.875482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:33.311 [2024-07-11 21:41:07.875509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:33.311 [2024-07-11 21:41:07.875523] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:33.311 [2024-07-11 21:41:07.875536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:33.311 [2024-07-11 21:41:07.875567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:33.311 qpair failed and we were unable to recover it.
00:34:33.311 [2024-07-11 21:41:07.885448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.311 [2024-07-11 21:41:07.885548] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.311 [2024-07-11 21:41:07.885574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.311 [2024-07-11 21:41:07.885588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.311 [2024-07-11 21:41:07.885601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.311 [2024-07-11 21:41:07.885631] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.311 qpair failed and we were unable to recover it. 00:34:33.311 [2024-07-11 21:41:07.895523] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.311 [2024-07-11 21:41:07.895646] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.895672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.895686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.895699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.895741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:07.905489] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.905625] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.905650] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.905665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.905678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.905708] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 
00:34:33.312 [2024-07-11 21:41:07.915591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.915700] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.915726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.915741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.915762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.915808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:07.925579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.925679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.925704] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.925718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.925731] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.925774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:07.935573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.935698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.935723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.935737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.935751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.935791] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 
00:34:33.312 [2024-07-11 21:41:07.945626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.945740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.945780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.945796] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.945809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.945839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:07.955663] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.955776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.955802] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.955816] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.955829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.955859] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:07.965789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.965918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.965944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.965959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.965972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.966002] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 
00:34:33.312 [2024-07-11 21:41:07.975695] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.975802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.975828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.975842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.975855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.975886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:07.985744] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.985862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.985887] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.985901] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.985914] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.985950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:07.995779] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:07.995889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:07.995914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:07.995928] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:07.995941] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:07.995972] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 
00:34:33.312 [2024-07-11 21:41:08.005778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:08.005880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:08.005905] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:08.005919] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:08.005931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:08.005961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:08.015806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:08.015915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:08.015940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:08.015954] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:08.015967] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:08.015997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 00:34:33.312 [2024-07-11 21:41:08.025842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:08.025964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:08.025990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.312 [2024-07-11 21:41:08.026004] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.312 [2024-07-11 21:41:08.026017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.312 [2024-07-11 21:41:08.026049] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.312 qpair failed and we were unable to recover it. 
00:34:33.312 [2024-07-11 21:41:08.035883] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.312 [2024-07-11 21:41:08.035987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.312 [2024-07-11 21:41:08.036017] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.313 [2024-07-11 21:41:08.036032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.313 [2024-07-11 21:41:08.036045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.313 [2024-07-11 21:41:08.036075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.313 qpair failed and we were unable to recover it. 00:34:33.313 [2024-07-11 21:41:08.045889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.313 [2024-07-11 21:41:08.045994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.313 [2024-07-11 21:41:08.046020] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.313 [2024-07-11 21:41:08.046034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.313 [2024-07-11 21:41:08.046047] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.313 [2024-07-11 21:41:08.046076] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.313 qpair failed and we were unable to recover it. 00:34:33.313 [2024-07-11 21:41:08.055950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.313 [2024-07-11 21:41:08.056069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.313 [2024-07-11 21:41:08.056093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.313 [2024-07-11 21:41:08.056107] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.313 [2024-07-11 21:41:08.056119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.313 [2024-07-11 21:41:08.056148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.313 qpair failed and we were unable to recover it. 
00:34:33.313 [2024-07-11 21:41:08.066029] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.313 [2024-07-11 21:41:08.066130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.313 [2024-07-11 21:41:08.066156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.313 [2024-07-11 21:41:08.066171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.313 [2024-07-11 21:41:08.066184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.313 [2024-07-11 21:41:08.066228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.313 qpair failed and we were unable to recover it. 00:34:33.313 [2024-07-11 21:41:08.075987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.313 [2024-07-11 21:41:08.076092] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.313 [2024-07-11 21:41:08.076117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.313 [2024-07-11 21:41:08.076131] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.313 [2024-07-11 21:41:08.076149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.313 [2024-07-11 21:41:08.076194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.313 qpair failed and we were unable to recover it. 00:34:33.572 [2024-07-11 21:41:08.086106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.572 [2024-07-11 21:41:08.086246] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.572 [2024-07-11 21:41:08.086272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.572 [2024-07-11 21:41:08.086286] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.572 [2024-07-11 21:41:08.086299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.572 [2024-07-11 21:41:08.086329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.572 qpair failed and we were unable to recover it. 
00:34:33.572 [2024-07-11 21:41:08.096065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.572 [2024-07-11 21:41:08.096185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.572 [2024-07-11 21:41:08.096210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.572 [2024-07-11 21:41:08.096225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.572 [2024-07-11 21:41:08.096237] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.572 [2024-07-11 21:41:08.096267] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.572 qpair failed and we were unable to recover it. 00:34:33.572 [2024-07-11 21:41:08.106099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.572 [2024-07-11 21:41:08.106249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.572 [2024-07-11 21:41:08.106274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.572 [2024-07-11 21:41:08.106288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.572 [2024-07-11 21:41:08.106302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.572 [2024-07-11 21:41:08.106332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.572 qpair failed and we were unable to recover it. 00:34:33.572 [2024-07-11 21:41:08.116088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.572 [2024-07-11 21:41:08.116199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.572 [2024-07-11 21:41:08.116227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.572 [2024-07-11 21:41:08.116242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.572 [2024-07-11 21:41:08.116257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.572 [2024-07-11 21:41:08.116289] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.572 qpair failed and we were unable to recover it. 
00:34:33.572 [2024-07-11 21:41:08.126124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.572 [2024-07-11 21:41:08.126226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.572 [2024-07-11 21:41:08.126253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.572 [2024-07-11 21:41:08.126267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.572 [2024-07-11 21:41:08.126281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.572 [2024-07-11 21:41:08.126310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.572 qpair failed and we were unable to recover it. 00:34:33.572 [2024-07-11 21:41:08.136143] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.572 [2024-07-11 21:41:08.136260] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.572 [2024-07-11 21:41:08.136286] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.572 [2024-07-11 21:41:08.136300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.572 [2024-07-11 21:41:08.136314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.572 [2024-07-11 21:41:08.136344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.572 qpair failed and we were unable to recover it. 00:34:33.572 [2024-07-11 21:41:08.146206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.572 [2024-07-11 21:41:08.146315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.572 [2024-07-11 21:41:08.146342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.572 [2024-07-11 21:41:08.146357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.572 [2024-07-11 21:41:08.146369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.146412] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 
00:34:33.573 [2024-07-11 21:41:08.156226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.156334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.156360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.156375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.156388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.156418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.166259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.166358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.166384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.166399] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.166420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.166451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.176293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.176412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.176440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.176455] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.176468] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.176500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 
00:34:33.573 [2024-07-11 21:41:08.186333] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.186460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.186486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.186500] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.186513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.186542] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.196457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.196562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.196589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.196603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.196616] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.196659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.206361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.206461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.206487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.206501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.206513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.206544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 
00:34:33.573 [2024-07-11 21:41:08.216461] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.216604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.216630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.216644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.216657] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.216687] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.226514] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.226630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.226656] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.226670] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.226683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.226712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.236515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.236643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.236669] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.236683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.236696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.236726] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 
00:34:33.573 [2024-07-11 21:41:08.246528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.246632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.246660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.246674] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.246687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.246717] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.256587] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.256701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.256727] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.256747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.256770] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.256801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.266610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.266717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.266743] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.266765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.266780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.266810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 
00:34:33.573 [2024-07-11 21:41:08.276573] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.276671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.276696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.573 [2024-07-11 21:41:08.276710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.573 [2024-07-11 21:41:08.276723] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.573 [2024-07-11 21:41:08.276761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.573 qpair failed and we were unable to recover it. 00:34:33.573 [2024-07-11 21:41:08.286606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.573 [2024-07-11 21:41:08.286764] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.573 [2024-07-11 21:41:08.286790] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.574 [2024-07-11 21:41:08.286804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.574 [2024-07-11 21:41:08.286817] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.574 [2024-07-11 21:41:08.286846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.574 qpair failed and we were unable to recover it. 00:34:33.574 [2024-07-11 21:41:08.296703] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.574 [2024-07-11 21:41:08.296873] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.574 [2024-07-11 21:41:08.296898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.574 [2024-07-11 21:41:08.296912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.574 [2024-07-11 21:41:08.296925] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.574 [2024-07-11 21:41:08.296955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.574 qpair failed and we were unable to recover it. 
00:34:33.574 [2024-07-11 21:41:08.306738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.574 [2024-07-11 21:41:08.306849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.574 [2024-07-11 21:41:08.306875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.574 [2024-07-11 21:41:08.306889] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.574 [2024-07-11 21:41:08.306902] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.574 [2024-07-11 21:41:08.306933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.574 qpair failed and we were unable to recover it. 00:34:33.574 [2024-07-11 21:41:08.316675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.574 [2024-07-11 21:41:08.316777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.574 [2024-07-11 21:41:08.316803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.574 [2024-07-11 21:41:08.316817] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.574 [2024-07-11 21:41:08.316830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.574 [2024-07-11 21:41:08.316862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.574 qpair failed and we were unable to recover it. 00:34:33.574 [2024-07-11 21:41:08.326712] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.574 [2024-07-11 21:41:08.326833] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.574 [2024-07-11 21:41:08.326859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.574 [2024-07-11 21:41:08.326873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.574 [2024-07-11 21:41:08.326885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.574 [2024-07-11 21:41:08.326915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.574 qpair failed and we were unable to recover it. 
00:34:33.574 [2024-07-11 21:41:08.336791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.574 [2024-07-11 21:41:08.336903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.574 [2024-07-11 21:41:08.336929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.574 [2024-07-11 21:41:08.336943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.574 [2024-07-11 21:41:08.336956] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.574 [2024-07-11 21:41:08.337000] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.574 qpair failed and we were unable to recover it. 00:34:33.833 [2024-07-11 21:41:08.346769] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.833 [2024-07-11 21:41:08.346886] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.833 [2024-07-11 21:41:08.346916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.833 [2024-07-11 21:41:08.346931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.833 [2024-07-11 21:41:08.346944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.833 [2024-07-11 21:41:08.346974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.833 qpair failed and we were unable to recover it. 00:34:33.833 [2024-07-11 21:41:08.356808] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.833 [2024-07-11 21:41:08.356914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.833 [2024-07-11 21:41:08.356939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.833 [2024-07-11 21:41:08.356953] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.833 [2024-07-11 21:41:08.356966] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.833 [2024-07-11 21:41:08.356997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.833 qpair failed and we were unable to recover it. 
00:34:33.833 [2024-07-11 21:41:08.366816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.833 [2024-07-11 21:41:08.366956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.833 [2024-07-11 21:41:08.366981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.833 [2024-07-11 21:41:08.366996] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.833 [2024-07-11 21:41:08.367008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.833 [2024-07-11 21:41:08.367038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.833 qpair failed and we were unable to recover it. 00:34:33.833 [2024-07-11 21:41:08.376965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.833 [2024-07-11 21:41:08.377082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.833 [2024-07-11 21:41:08.377110] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.833 [2024-07-11 21:41:08.377124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.833 [2024-07-11 21:41:08.377137] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.833 [2024-07-11 21:41:08.377168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.833 qpair failed and we were unable to recover it. 00:34:33.833 [2024-07-11 21:41:08.386881] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.833 [2024-07-11 21:41:08.386996] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.387022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.387036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.387050] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.387086] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 
00:34:33.834 [2024-07-11 21:41:08.396948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.397053] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.397079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.397093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.397106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.397138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.406960] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.407081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.407107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.407120] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.407133] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.407163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.417017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.417121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.417146] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.417161] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.417174] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.417203] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 
00:34:33.834 [2024-07-11 21:41:08.427021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.427168] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.427194] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.427208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.427221] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.427266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.437120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.437252] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.437282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.437298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.437311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.437342] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.447065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.447172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.447197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.447211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.447224] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.447254] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 
00:34:33.834 [2024-07-11 21:41:08.457078] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.457212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.457239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.457255] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.457269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.457299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.467091] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.467243] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.467269] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.467283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.467296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.467327] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.477205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.477345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.477371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.477385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.477399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.477448] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 
00:34:33.834 [2024-07-11 21:41:08.487159] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.487293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.487319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.487333] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.487347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.487378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.497214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.497329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.497355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.497369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.497382] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.497411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.507257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.507363] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.507389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.507403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.507416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.507446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 
00:34:33.834 [2024-07-11 21:41:08.517231] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.517335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.517361] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.834 [2024-07-11 21:41:08.517375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.834 [2024-07-11 21:41:08.517388] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.834 [2024-07-11 21:41:08.517419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.834 qpair failed and we were unable to recover it. 00:34:33.834 [2024-07-11 21:41:08.527313] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.834 [2024-07-11 21:41:08.527420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.834 [2024-07-11 21:41:08.527446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.835 [2024-07-11 21:41:08.527461] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.835 [2024-07-11 21:41:08.527474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.835 [2024-07-11 21:41:08.527504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.835 qpair failed and we were unable to recover it. 00:34:33.835 [2024-07-11 21:41:08.537330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.835 [2024-07-11 21:41:08.537441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.835 [2024-07-11 21:41:08.537467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.835 [2024-07-11 21:41:08.537481] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.835 [2024-07-11 21:41:08.537495] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.835 [2024-07-11 21:41:08.537525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.835 qpair failed and we were unable to recover it. 
00:34:33.835 [2024-07-11 21:41:08.547385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.835 [2024-07-11 21:41:08.547494] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.835 [2024-07-11 21:41:08.547522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.835 [2024-07-11 21:41:08.547538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.835 [2024-07-11 21:41:08.547551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.835 [2024-07-11 21:41:08.547584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.835 qpair failed and we were unable to recover it. 00:34:33.835 [2024-07-11 21:41:08.557386] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.835 [2024-07-11 21:41:08.557493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.835 [2024-07-11 21:41:08.557520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.835 [2024-07-11 21:41:08.557534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.835 [2024-07-11 21:41:08.557547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.835 [2024-07-11 21:41:08.557577] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.835 qpair failed and we were unable to recover it. 00:34:33.835 [2024-07-11 21:41:08.567417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.835 [2024-07-11 21:41:08.567519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.835 [2024-07-11 21:41:08.567545] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.835 [2024-07-11 21:41:08.567559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.835 [2024-07-11 21:41:08.567578] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.835 [2024-07-11 21:41:08.567610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.835 qpair failed and we were unable to recover it. 
00:34:33.835 [2024-07-11 21:41:08.577404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.835 [2024-07-11 21:41:08.577510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.835 [2024-07-11 21:41:08.577536] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.835 [2024-07-11 21:41:08.577549] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.835 [2024-07-11 21:41:08.577562] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.835 [2024-07-11 21:41:08.577592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.835 qpair failed and we were unable to recover it. 00:34:33.835 [2024-07-11 21:41:08.587484] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.835 [2024-07-11 21:41:08.587589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.835 [2024-07-11 21:41:08.587616] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.835 [2024-07-11 21:41:08.587630] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.835 [2024-07-11 21:41:08.587644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.835 [2024-07-11 21:41:08.587674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.835 qpair failed and we were unable to recover it. 00:34:33.835 [2024-07-11 21:41:08.597528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:33.835 [2024-07-11 21:41:08.597633] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:33.835 [2024-07-11 21:41:08.597659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:33.835 [2024-07-11 21:41:08.597673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:33.835 [2024-07-11 21:41:08.597687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:33.835 [2024-07-11 21:41:08.597716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:33.835 qpair failed and we were unable to recover it. 
00:34:34.094 [2024-07-11 21:41:08.607510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.607644] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.607672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.607686] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.607704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.607736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 00:34:34.094 [2024-07-11 21:41:08.617543] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.617669] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.617696] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.617712] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.617725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.617765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 00:34:34.094 [2024-07-11 21:41:08.627569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.627689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.627715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.627729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.627742] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.627781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 
00:34:34.094 [2024-07-11 21:41:08.637630] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.637788] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.637815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.637829] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.637842] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.637873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 00:34:34.094 [2024-07-11 21:41:08.647621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.647733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.647767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.647785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.647802] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.647834] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 00:34:34.094 [2024-07-11 21:41:08.657643] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.657809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.657835] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.657856] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.657870] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.657913] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 
00:34:34.094 [2024-07-11 21:41:08.667641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.667751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.667788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.667802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.667815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.667846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 00:34:34.094 [2024-07-11 21:41:08.677740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.677859] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.677886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.677900] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.677913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.677944] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 00:34:34.094 [2024-07-11 21:41:08.687694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.687825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.687851] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.687865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.687878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.687910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 
00:34:34.094 [2024-07-11 21:41:08.697763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.697878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.697903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.094 [2024-07-11 21:41:08.697917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.094 [2024-07-11 21:41:08.697931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.094 [2024-07-11 21:41:08.697961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.094 qpair failed and we were unable to recover it. 00:34:34.094 [2024-07-11 21:41:08.707786] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.094 [2024-07-11 21:41:08.707896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.094 [2024-07-11 21:41:08.707921] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.707934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.707947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.707978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.717807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.717920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.717946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.717960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.717973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.718004] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 
00:34:34.095 [2024-07-11 21:41:08.727911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.728043] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.728069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.728083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.728095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.728127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.737890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.738017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.738043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.738057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.738070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.738101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.747918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.748025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.748050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.748071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.748085] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.748116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 
00:34:34.095 [2024-07-11 21:41:08.757953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.758060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.758086] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.758100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.758113] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.758157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.767953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.768085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.768111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.768125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.768138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.768168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.778052] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.778158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.778183] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.778197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.778210] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.778240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 
00:34:34.095 [2024-07-11 21:41:08.788006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.788107] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.788131] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.788145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.788158] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.788188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.798059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.798185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.798209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.798222] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.798234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.798263] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.808096] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.808214] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.808241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.808256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.808269] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.808300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 
00:34:34.095 [2024-07-11 21:41:08.818121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.818235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.818261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.818275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.818289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.818320] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.828126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.828282] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.828308] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.828322] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.828336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.828366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 00:34:34.095 [2024-07-11 21:41:08.838205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.838349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.838380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.838395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.095 [2024-07-11 21:41:08.838408] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.095 [2024-07-11 21:41:08.838455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.095 qpair failed and we were unable to recover it. 
00:34:34.095 [2024-07-11 21:41:08.848201] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.095 [2024-07-11 21:41:08.848309] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.095 [2024-07-11 21:41:08.848335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.095 [2024-07-11 21:41:08.848349] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.096 [2024-07-11 21:41:08.848364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.096 [2024-07-11 21:41:08.848395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.096 qpair failed and we were unable to recover it. 00:34:34.096 [2024-07-11 21:41:08.858264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.096 [2024-07-11 21:41:08.858389] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.096 [2024-07-11 21:41:08.858415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.096 [2024-07-11 21:41:08.858429] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.096 [2024-07-11 21:41:08.858443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.096 [2024-07-11 21:41:08.858475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.096 qpair failed and we were unable to recover it. 00:34:34.355 [2024-07-11 21:41:08.868235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.355 [2024-07-11 21:41:08.868354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.355 [2024-07-11 21:41:08.868380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.355 [2024-07-11 21:41:08.868395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.355 [2024-07-11 21:41:08.868409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.355 [2024-07-11 21:41:08.868440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.355 qpair failed and we were unable to recover it. 
00:34:34.355 [2024-07-11 21:41:08.878290] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.355 [2024-07-11 21:41:08.878406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.355 [2024-07-11 21:41:08.878433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.355 [2024-07-11 21:41:08.878448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.355 [2024-07-11 21:41:08.878462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.355 [2024-07-11 21:41:08.878499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.355 qpair failed and we were unable to recover it. 00:34:34.355 [2024-07-11 21:41:08.888382] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.355 [2024-07-11 21:41:08.888492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.355 [2024-07-11 21:41:08.888519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.355 [2024-07-11 21:41:08.888534] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.355 [2024-07-11 21:41:08.888548] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.355 [2024-07-11 21:41:08.888592] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.355 qpair failed and we were unable to recover it. 00:34:34.355 [2024-07-11 21:41:08.898347] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.355 [2024-07-11 21:41:08.898455] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.355 [2024-07-11 21:41:08.898482] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.355 [2024-07-11 21:41:08.898496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.355 [2024-07-11 21:41:08.898510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.355 [2024-07-11 21:41:08.898540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.355 qpair failed and we were unable to recover it. 
00:34:34.355 [2024-07-11 21:41:08.908361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:34.356 [2024-07-11 21:41:08.908472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:34.356 [2024-07-11 21:41:08.908500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:34.356 [2024-07-11 21:41:08.908515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:34.356 [2024-07-11 21:41:08.908528] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:34.356 [2024-07-11 21:41:08.908558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:34.356 qpair failed and we were unable to recover it.
[the seven-line failure block above repeats 68 more times at roughly 10 ms intervals, from 21:41:08.918 through 21:41:09.590, differing only in timestamps; every attempt targets the same tqpair 0x7fb7a0000b90 / qpair id 2 and fails with the identical sct 1, sc 130 status]
00:34:34.880 [2024-07-11 21:41:09.600356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.880 [2024-07-11 21:41:09.600460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.880 [2024-07-11 21:41:09.600487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.880 [2024-07-11 21:41:09.600501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.880 [2024-07-11 21:41:09.600514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.880 [2024-07-11 21:41:09.600545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.880 qpair failed and we were unable to recover it. 00:34:34.880 [2024-07-11 21:41:09.610419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.880 [2024-07-11 21:41:09.610541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.880 [2024-07-11 21:41:09.610567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.880 [2024-07-11 21:41:09.610583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.880 [2024-07-11 21:41:09.610597] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.880 [2024-07-11 21:41:09.610628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.880 qpair failed and we were unable to recover it. 00:34:34.880 [2024-07-11 21:41:09.620398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.880 [2024-07-11 21:41:09.620499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.880 [2024-07-11 21:41:09.620525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.880 [2024-07-11 21:41:09.620539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.880 [2024-07-11 21:41:09.620552] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.880 [2024-07-11 21:41:09.620583] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.880 qpair failed and we were unable to recover it. 
00:34:34.880 [2024-07-11 21:41:09.630440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.880 [2024-07-11 21:41:09.630593] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.880 [2024-07-11 21:41:09.630619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.880 [2024-07-11 21:41:09.630634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.880 [2024-07-11 21:41:09.630647] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.880 [2024-07-11 21:41:09.630679] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.880 qpair failed and we were unable to recover it. 00:34:34.880 [2024-07-11 21:41:09.640475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:34.880 [2024-07-11 21:41:09.640627] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:34.880 [2024-07-11 21:41:09.640660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:34.880 [2024-07-11 21:41:09.640676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:34.880 [2024-07-11 21:41:09.640690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:34.880 [2024-07-11 21:41:09.640720] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:34.880 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.650515] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.650617] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.650643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.650658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.650671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.650703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-11 21:41:09.660551] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.660663] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.660688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.660703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.660717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.660747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.670584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.670691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.670721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.670738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.670763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.670814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.680585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.680688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.680715] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.680730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.680744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.680794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-11 21:41:09.690611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.690713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.690740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.690765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.690782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.690813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.700667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.700777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.700804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.700819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.700832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.700864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.710665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.710799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.710826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.710842] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.710855] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.710889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-11 21:41:09.720716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.720830] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.720857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.720872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.720885] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.720916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.730737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.730851] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.730883] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.730899] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.730912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.730943] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.740750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.740865] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.740892] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.740906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.740919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.740951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 
00:34:35.140 [2024-07-11 21:41:09.750803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.750911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.750936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.750951] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.750965] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.750995] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.760908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.761047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.140 [2024-07-11 21:41:09.761073] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.140 [2024-07-11 21:41:09.761088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.140 [2024-07-11 21:41:09.761101] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.140 [2024-07-11 21:41:09.761147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.140 qpair failed and we were unable to recover it. 00:34:35.140 [2024-07-11 21:41:09.770873] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.140 [2024-07-11 21:41:09.770989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.771022] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.771036] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.771057] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.771088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-11 21:41:09.780971] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.781082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.781108] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.781123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.781136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.781167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-11 21:41:09.790905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.791008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.791034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.791049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.791061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.791092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-11 21:41:09.800950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.801052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.801078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.801093] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.801106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.801136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-11 21:41:09.810969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.811069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.811095] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.811110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.811123] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.811153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-11 21:41:09.820994] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.821102] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.821128] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.821143] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.821156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.821187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-11 21:41:09.831002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.831106] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.831132] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.831147] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.831160] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.831191] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-11 21:41:09.841062] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.841160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.841185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.841199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.841211] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.841242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-11 21:41:09.851065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.851194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.851220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.851235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.851249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.851293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-11 21:41:09.861101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.861203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.861228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.861243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.861262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.861307] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-11 21:41:09.871122] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.871235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.871260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.871274] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.871288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.871318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-11 21:41:09.881180] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.881286] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.881311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.881325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.881337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.881368] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 00:34:35.141 [2024-07-11 21:41:09.891165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.891265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.891289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.141 [2024-07-11 21:41:09.891302] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.141 [2024-07-11 21:41:09.891315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.141 [2024-07-11 21:41:09.891346] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.141 qpair failed and we were unable to recover it. 
00:34:35.141 [2024-07-11 21:41:09.901239] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.141 [2024-07-11 21:41:09.901345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.141 [2024-07-11 21:41:09.901370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.142 [2024-07-11 21:41:09.901384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.142 [2024-07-11 21:41:09.901397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.142 [2024-07-11 21:41:09.901428] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.142 qpair failed and we were unable to recover it. 00:34:35.400 [2024-07-11 21:41:09.911236] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.400 [2024-07-11 21:41:09.911361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.400 [2024-07-11 21:41:09.911385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.400 [2024-07-11 21:41:09.911401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.400 [2024-07-11 21:41:09.911415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.400 [2024-07-11 21:41:09.911446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.400 qpair failed and we were unable to recover it. 00:34:35.400 [2024-07-11 21:41:09.921262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.400 [2024-07-11 21:41:09.921362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.400 [2024-07-11 21:41:09.921387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.400 [2024-07-11 21:41:09.921401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.400 [2024-07-11 21:41:09.921414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.400 [2024-07-11 21:41:09.921444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.400 qpair failed and we were unable to recover it. 
00:34:35.400 [2024-07-11 21:41:09.931317] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.400 [2024-07-11 21:41:09.931420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.400 [2024-07-11 21:41:09.931444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.400 [2024-07-11 21:41:09.931459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.400 [2024-07-11 21:41:09.931472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.400 [2024-07-11 21:41:09.931503] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.400 qpair failed and we were unable to recover it. 00:34:35.400 [2024-07-11 21:41:09.941338] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.400 [2024-07-11 21:41:09.941489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.400 [2024-07-11 21:41:09.941516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.400 [2024-07-11 21:41:09.941532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.400 [2024-07-11 21:41:09.941545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.400 [2024-07-11 21:41:09.941575] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:09.951430] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:09.951539] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:09.951563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:09.951583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:09.951598] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:09.951629] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 
00:34:35.401 [2024-07-11 21:41:09.961383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:09.961485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:09.961510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:09.961524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:09.961537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:09.961581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:09.971499] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:09.971600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:09.971626] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:09.971640] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:09.971654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:09.971684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:09.981507] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:09.981622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:09.981647] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:09.981662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:09.981675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:09.981707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 
00:34:35.401 [2024-07-11 21:41:09.991458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:09.991559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:09.991585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:09.991599] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:09.991614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:09.991645] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:10.001585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.001699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.001728] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:10.001759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:10.001778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:10.001811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:10.011584] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.011704] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.011733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:10.011749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:10.011771] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:10.011806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 
00:34:35.401 [2024-07-11 21:41:10.021571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.021722] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.021749] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:10.021776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:10.021792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:10.021824] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:10.031571] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.031674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.031701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:10.031715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:10.031729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:10.031771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:10.041669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.041783] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.041816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:10.041831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:10.041844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:10.041877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 
00:34:35.401 [2024-07-11 21:41:10.051641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.051741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.051775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:10.051791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:10.051804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:10.051835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:10.061658] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.061771] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.061795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:10.061809] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:10.061822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:10.061852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 00:34:35.401 [2024-07-11 21:41:10.071694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.071799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.071826] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.401 [2024-07-11 21:41:10.071841] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.401 [2024-07-11 21:41:10.071854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.401 [2024-07-11 21:41:10.071884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.401 qpair failed and we were unable to recover it. 
00:34:35.401 [2024-07-11 21:41:10.081725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.401 [2024-07-11 21:41:10.081864] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.401 [2024-07-11 21:41:10.081891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.081906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.081920] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.081958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 00:34:35.402 [2024-07-11 21:41:10.091784] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.402 [2024-07-11 21:41:10.091908] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.402 [2024-07-11 21:41:10.091934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.091949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.091964] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.091994] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 00:34:35.402 [2024-07-11 21:41:10.101785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.402 [2024-07-11 21:41:10.101896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.402 [2024-07-11 21:41:10.101922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.101936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.101951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.101981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 
00:34:35.402 [2024-07-11 21:41:10.111884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.402 [2024-07-11 21:41:10.112001] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.402 [2024-07-11 21:41:10.112039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.112053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.112067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.112098] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 00:34:35.402 [2024-07-11 21:41:10.121824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.402 [2024-07-11 21:41:10.121928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.402 [2024-07-11 21:41:10.121954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.121969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.121983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.122013] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 00:34:35.402 [2024-07-11 21:41:10.131857] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.402 [2024-07-11 21:41:10.131975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.402 [2024-07-11 21:41:10.132006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.132021] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.132035] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.132065] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 
00:34:35.402 [2024-07-11 21:41:10.141898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.402 [2024-07-11 21:41:10.142009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.402 [2024-07-11 21:41:10.142035] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.142057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.142070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.142101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 00:34:35.402 [2024-07-11 21:41:10.151952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.402 [2024-07-11 21:41:10.152087] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.402 [2024-07-11 21:41:10.152113] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.152128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.152142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.152172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 00:34:35.402 [2024-07-11 21:41:10.161951] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.402 [2024-07-11 21:41:10.162094] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.402 [2024-07-11 21:41:10.162121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.402 [2024-07-11 21:41:10.162135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.402 [2024-07-11 21:41:10.162149] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.402 [2024-07-11 21:41:10.162179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.402 qpair failed and we were unable to recover it. 
00:34:35.661 [2024-07-11 21:41:10.171985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.172104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.172130] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.172145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.172159] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.172196] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.182030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.182144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.182170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.182185] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.182199] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.182229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.192125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.192253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.192280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.192295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.192310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.192352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 
00:34:35.662 [2024-07-11 21:41:10.202071] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.202176] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.202203] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.202218] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.202232] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.202275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.212087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.212198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.212227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.212243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.212257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.212288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.222209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.222327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.222354] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.222369] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.222383] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.222413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 
00:34:35.662 [2024-07-11 21:41:10.232153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.232287] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.232314] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.232328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.232342] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.232373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.242294] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.242406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.242432] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.242447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.242460] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.242490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.252248] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.252354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.252380] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.252395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.252409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.252441] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 
00:34:35.662 [2024-07-11 21:41:10.262349] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.262506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.262533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.262547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.262569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.262602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.272367] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.272517] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.272544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.272559] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.272572] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.272602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.282296] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.282400] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.282426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.282441] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.282455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.282485] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 
00:34:35.662 [2024-07-11 21:41:10.292332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.292439] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.292465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.292480] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.292493] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.292524] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.302439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.302549] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.302575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.302589] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.662 [2024-07-11 21:41:10.302603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.662 [2024-07-11 21:41:10.302634] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.662 qpair failed and we were unable to recover it. 00:34:35.662 [2024-07-11 21:41:10.312364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.662 [2024-07-11 21:41:10.312485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.662 [2024-07-11 21:41:10.312511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.662 [2024-07-11 21:41:10.312526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.312540] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.312570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 
00:34:35.663 [2024-07-11 21:41:10.322378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.322485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.322511] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.322526] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.322541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.322571] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 00:34:35.663 [2024-07-11 21:41:10.332517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.332626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.332653] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.332668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.332682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.332725] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 00:34:35.663 [2024-07-11 21:41:10.342576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.342715] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.342742] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.342764] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.342780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.342811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 
00:34:35.663 [2024-07-11 21:41:10.352511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.352665] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.352692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.352713] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.352727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.352766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 00:34:35.663 [2024-07-11 21:41:10.362634] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.362782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.362809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.362824] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.362838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.362869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 00:34:35.663 [2024-07-11 21:41:10.372537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.372650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.372677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.372692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.372706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.372737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 
00:34:35.663 [2024-07-11 21:41:10.382622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.382733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.382767] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.382792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.382807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.382838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 00:34:35.663 [2024-07-11 21:41:10.392585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.392692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.392718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.392733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.392747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.392785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 00:34:35.663 [2024-07-11 21:41:10.402662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.402809] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.402837] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.402852] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.402866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.402896] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 
00:34:35.663 [2024-07-11 21:41:10.412652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.412772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.412798] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.412813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.412826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.412858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 00:34:35.663 [2024-07-11 21:41:10.422676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.663 [2024-07-11 21:41:10.422789] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.663 [2024-07-11 21:41:10.422816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.663 [2024-07-11 21:41:10.422830] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.663 [2024-07-11 21:41:10.422844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.663 [2024-07-11 21:41:10.422876] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.663 qpair failed and we were unable to recover it. 00:34:35.922 [2024-07-11 21:41:10.432705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.922 [2024-07-11 21:41:10.432834] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.922 [2024-07-11 21:41:10.432861] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.922 [2024-07-11 21:41:10.432876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.922 [2024-07-11 21:41:10.432890] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.922 [2024-07-11 21:41:10.432922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.922 qpair failed and we were unable to recover it. 
00:34:35.922 [2024-07-11 21:41:10.442743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.922 [2024-07-11 21:41:10.442856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.922 [2024-07-11 21:41:10.442885] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.922 [2024-07-11 21:41:10.442907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.922 [2024-07-11 21:41:10.442922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.922 [2024-07-11 21:41:10.442953] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.922 qpair failed and we were unable to recover it. 00:34:35.922 [2024-07-11 21:41:10.452767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.922 [2024-07-11 21:41:10.452874] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.922 [2024-07-11 21:41:10.452901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.922 [2024-07-11 21:41:10.452915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.922 [2024-07-11 21:41:10.452930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.922 [2024-07-11 21:41:10.452960] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.922 qpair failed and we were unable to recover it. 00:34:35.922 [2024-07-11 21:41:10.462825] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.922 [2024-07-11 21:41:10.462941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.922 [2024-07-11 21:41:10.462968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.922 [2024-07-11 21:41:10.462983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.922 [2024-07-11 21:41:10.462997] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.922 [2024-07-11 21:41:10.463027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.922 qpair failed and we were unable to recover it. 
00:34:35.922 [2024-07-11 21:41:10.472807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.922 [2024-07-11 21:41:10.472959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.922 [2024-07-11 21:41:10.472986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.922 [2024-07-11 21:41:10.473001] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.922 [2024-07-11 21:41:10.473015] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.473045] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.482859] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.482973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.483000] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.483015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.483029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.483059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.492906] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.493017] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.493043] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.493059] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.493072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.493103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 
00:34:35.923 [2024-07-11 21:41:10.502913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.503027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.503053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.503068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.503082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.503112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.513047] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.513161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.513188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.513203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.513216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.513260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.522972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.523095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.523122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.523136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.523150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.523180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 
00:34:35.923 [2024-07-11 21:41:10.532999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.533109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.533140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.533156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.533171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.533201] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.543059] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.543164] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.543191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.543206] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.543220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.543252] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.553057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.553179] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.553206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.553225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.553241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.553272] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 
00:34:35.923 [2024-07-11 21:41:10.563172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.563293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.563320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.563335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.563350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.563380] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.573090] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.573197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.573223] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.573238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.573252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.573288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.583133] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.583238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.583263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.583277] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.583291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.583322] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 
00:34:35.923 [2024-07-11 21:41:10.593150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.593267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.593293] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.593307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.593321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.593352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.923 [2024-07-11 21:41:10.603205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.923 [2024-07-11 21:41:10.603330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.923 [2024-07-11 21:41:10.603356] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.923 [2024-07-11 21:41:10.603371] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.923 [2024-07-11 21:41:10.603385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.923 [2024-07-11 21:41:10.603415] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.923 qpair failed and we were unable to recover it. 00:34:35.924 [2024-07-11 21:41:10.613260] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:34:35.924 [2024-07-11 21:41:10.613374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:34:35.924 [2024-07-11 21:41:10.613400] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:34:35.924 [2024-07-11 21:41:10.613414] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:34:35.924 [2024-07-11 21:41:10.613428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90 00:34:35.924 [2024-07-11 21:41:10.613459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:34:35.924 qpair failed and we were unable to recover it. 
00:34:35.924 [2024-07-11 21:41:10.623343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:34:35.924 [2024-07-11 21:41:10.623450] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:34:35.924 [2024-07-11 21:41:10.623480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:34:35.924 [2024-07-11 21:41:10.623496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:34:35.924 [2024-07-11 21:41:10.623510] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb7a0000b90
00:34:35.924 [2024-07-11 21:41:10.623540] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:34:35.924 qpair failed and we were unable to recover it.
[The seven-message CONNECT failure block above repeats, identical apart from timestamps, at roughly 10 ms intervals from 21:41:10.633 through 21:41:10.884, always against tqpair=0x7fb7a0000b90 on qpair id 2.]
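The status pair in these messages repays decoding: sct 1 is the command-specific status code type, and sc 130 (0x82) appears to correspond to the NVMe-oF CONNECT "invalid parameters" status, which is consistent with the target-side complaint about the unknown controller ID 0x1. A minimal sketch of how a host completion callback could report such a status, assuming only the public SPDK API from spdk/nvme.h (the connect_done() name is invented for illustration and is not taken from the test code):

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative completion callback: decode the status pair from a
 * failed Fabrics CONNECT completion, as in the "sct 1, sc 130" lines. */
static void
connect_done(void *ctx, const struct spdk_nvme_cpl *cpl)
{
	(void)ctx;
	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "CONNECT failed: sct %u, sc %u (%s)\n",
			(unsigned)cpl->status.sct, (unsigned)cpl->status.sc,
			spdk_nvme_cpl_get_status_string(&cpl->status));
	}
}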
[The same failure block keeps repeating at the same cadence from 21:41:10.894 through 21:41:11.154, each attempt ending in "qpair failed and we were unable to recover it."]
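The closing "CQ transport error -6 (No such device or address)" line is the driver's completion-processing call returning -ENXIO (-6 on Linux) once the TCP qpair has gone away. A minimal sketch of host-side polling under the same assumption that only the public SPDK API is used (poll_io_qpair() and the batch size of 32 are illustrative choices, not the test tool's actual values):

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative poll helper: a negative return from
 * spdk_nvme_qpair_process_completions() signals a transport-level
 * failure on the qpair rather than a per-I/O error status. */
static void
poll_io_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int32_t rc = spdk_nvme_qpair_process_completions(qpair, 32);

	if (rc < 0) {
		fprintf(stderr, "qpair poll failed: transport error %d\n", (int)rc);
		if (spdk_nvme_ctrlr_is_failed(ctrlr)) {
			/* The failure has escalated to the controller;
			 * a reset is required before new I/O can run. */
			fprintf(stderr, "controller failed, reset required\n");
		}
	}
}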
[Three final repetitions of the failure block follow at 21:41:11.164, 21:41:11.174 and 21:41:11.184, before the keep-alive path gives up:]
00:34:36.446 [2024-07-11 21:41:11.185188] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:34:36.446 A controller has encountered a failure and is being reset.
00:34:36.446 qpair failed and we were unable to recover it.
00:34:36.446 Controller properly reset.
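The keep-alive failure is what finally escalates the error from individual qpairs to the controller, and "Controller properly reset." reports that recovery succeeded. A hedged sketch of such a recovery step, assuming the public SPDK API (spdk_nvme_ctrlr_reset() and spdk_nvme_ctrlr_reconnect_io_qpair() are the relevant entry points there; recover_controller() is an invented helper, and the test tool's real logic may differ):

#include <stdio.h>
#include "spdk/nvme.h"

/* Illustrative recovery helper: reset the failed controller, then
 * bring an I/O qpair back into service. */
static int
recover_controller(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *qpair)
{
	int rc = spdk_nvme_ctrlr_reset(ctrlr);

	if (rc != 0) {
		fprintf(stderr, "controller reset failed: %d\n", rc);
		return rc;
	}
	/* If the qpair is still disconnected after the reset, it has to
	 * be reconnected explicitly before it can be polled again. */
	rc = spdk_nvme_ctrlr_reconnect_io_qpair(qpair);
	if (rc != 0) {
		fprintf(stderr, "qpair reconnect failed: %d\n", rc);
	}
	return rc;
}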
00:34:36.706 Initializing NVMe Controllers 00:34:36.706 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:36.706 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:34:36.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:34:36.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:34:36.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:34:36.706 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:34:36.706 Initialization complete. Launching workers. 00:34:36.706 Starting thread on core 1 00:34:36.706 Starting thread on core 2 00:34:36.706 Starting thread on core 3 00:34:36.706 Starting thread on core 0 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:34:36.706 00:34:36.706 real 0m10.711s 00:34:36.706 user 0m18.194s 00:34:36.706 sys 0m5.536s 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:34:36.706 ************************************ 00:34:36.706 END TEST nvmf_target_disconnect_tc2 00:34:36.706 ************************************ 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:34:36.706 rmmod nvme_tcp 00:34:36.706 rmmod nvme_fabrics 00:34:36.706 rmmod nvme_keyring 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 1063727 ']' 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 1063727 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 1063727 ']' 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 1063727 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps 
--no-headers -o comm= 1063727 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:34:36.706 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1063727' 00:34:36.706 killing process with pid 1063727 00:34:36.707 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 1063727 00:34:36.707 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 1063727 00:34:36.965 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:34:36.965 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:34:36.965 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:34:36.965 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:34:36.965 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:34:36.965 21:41:11 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:36.965 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:34:36.965 21:41:11 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.501 21:41:13 nvmf_tcp.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:34:39.501 00:34:39.501 real 0m15.462s 00:34:39.501 user 0m44.102s 00:34:39.501 sys 0m7.458s 00:34:39.501 21:41:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:39.501 21:41:13 nvmf_tcp.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:34:39.501 ************************************ 00:34:39.501 END TEST nvmf_target_disconnect 00:34:39.501 ************************************ 00:34:39.501 21:41:13 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:34:39.501 21:41:13 nvmf_tcp -- nvmf/nvmf.sh@126 -- # timing_exit host 00:34:39.501 21:41:13 nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:39.501 21:41:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.501 21:41:13 nvmf_tcp -- nvmf/nvmf.sh@128 -- # trap - SIGINT SIGTERM EXIT 00:34:39.501 00:34:39.501 real 27m10.090s 00:34:39.501 user 74m7.764s 00:34:39.501 sys 6m22.148s 00:34:39.501 21:41:13 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:39.501 21:41:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.501 ************************************ 00:34:39.501 END TEST nvmf_tcp 00:34:39.501 ************************************ 00:34:39.501 21:41:13 -- common/autotest_common.sh@1142 -- # return 0 00:34:39.501 21:41:13 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:34:39.501 21:41:13 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:39.501 21:41:13 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:39.501 21:41:13 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:39.501 21:41:13 -- common/autotest_common.sh@10 -- # set +x 00:34:39.501 ************************************ 00:34:39.501 START TEST spdkcli_nvmf_tcp 00:34:39.501 ************************************ 00:34:39.501 21:41:13 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:34:39.501 * Looking for test storage... 00:34:39.501 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.501 21:41:13 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=1064939 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 1064939 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 1064939 ']' 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:39.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:34:39.502 21:41:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.502 [2024-07-11 21:41:13.879217] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:34:39.502 [2024-07-11 21:41:13.879310] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1064939 ] 00:34:39.502 EAL: No free 2048 kB hugepages reported on node 1 00:34:39.502 [2024-07-11 21:41:13.936640] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:34:39.502 [2024-07-11 21:41:14.021761] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:34:39.502 [2024-07-11 21:41:14.021763] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:39.502 21:41:14 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:34:39.502 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:34:39.502 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:34:39.502 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:34:39.502 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:34:39.502 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:34:39.502 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:34:39.502 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:39.502 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:34:39.502 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:39.502 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:34:39.502 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:34:39.502 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:34:39.502 ' 00:34:42.039 [2024-07-11 21:41:16.719794] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:43.414 [2024-07-11 21:41:17.936125] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:34:45.940 [2024-07-11 21:41:20.191226] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:34:47.864 [2024-07-11 21:41:22.137363] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:34:49.240 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:34:49.240 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:34:49.240 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:34:49.240 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:34:49.240 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:34:49.240 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:34:49.240 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:34:49.240 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:34:49.240 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:49.240 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:49.240 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:34:49.240 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:34:49.240 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:34:49.240 21:41:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:34:49.240 21:41:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:49.240 21:41:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.240 21:41:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:34:49.240 21:41:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:49.240 21:41:23 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.240 21:41:23 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:34:49.240 21:41:23 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:34:49.498 21:41:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:34:49.499 21:41:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:34:49.499 21:41:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:34:49.499 21:41:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:49.499 21:41:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.499 21:41:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:34:49.499 21:41:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:49.499 21:41:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:49.499 21:41:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:34:49.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:34:49.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:49.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:34:49.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:34:49.499 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:34:49.499 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:34:49.499 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:34:49.499 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:34:49.499 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:34:49.499 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:34:49.499 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:34:49.499 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:34:49.499 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:34:49.499 ' 00:34:54.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:34:54.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:34:54.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:54.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:34:54.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:34:54.773 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:34:54.773 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:34:54.773 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:34:54.773 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:34:54.773 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:34:54.773 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:34:54.773 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:34:54.773 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:34:54.773 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 1064939 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1064939 ']' 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1064939 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1064939 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1064939' 00:34:54.773 killing process with pid 1064939 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 1064939 00:34:54.773 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 1064939 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 1064939 ']' 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 1064939 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 1064939 ']' 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 1064939 00:34:55.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1064939) - No such process 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 1064939 is not found' 00:34:55.031 Process with pid 1064939 is not found 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:34:55.031 00:34:55.031 real 0m15.925s 00:34:55.031 user 0m33.645s 00:34:55.031 sys 0m0.783s 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:34:55.031 21:41:29 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:34:55.031 ************************************ 00:34:55.031 END TEST spdkcli_nvmf_tcp 00:34:55.031 ************************************ 00:34:55.031 21:41:29 -- common/autotest_common.sh@1142 -- # return 0 00:34:55.031 21:41:29 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:55.031 21:41:29 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:34:55.031 21:41:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:34:55.031 21:41:29 -- common/autotest_common.sh@10 -- # set +x 00:34:55.031 ************************************ 00:34:55.031 START TEST nvmf_identify_passthru 00:34:55.031 ************************************ 00:34:55.031 21:41:29 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:34:55.031 * Looking for test storage... 00:34:55.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:55.031 21:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:55.031 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:55.031 21:41:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.031 21:41:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.031 21:41:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.031 21:41:29 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.031 21:41:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.032 21:41:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.032 21:41:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:55.032 21:41:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:34:55.032 21:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:55.032 21:41:29 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:55.032 21:41:29 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:55.032 21:41:29 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:55.032 21:41:29 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.032 21:41:29 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.032 21:41:29 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.032 21:41:29 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:34:55.032 21:41:29 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:55.032 21:41:29 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:34:55.032 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:55.032 21:41:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:34:55.032 21:41:29 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:55.290 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:34:55.290 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:34:55.290 21:41:29 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:34:55.290 21:41:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:57.244 21:41:31 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:34:57.244 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:34:57.244 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:34:57.244 Found net devices under 0000:0a:00.0: cvl_0_0 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:34:57.244 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:34:57.245 Found net devices under 0000:0a:00.1: cvl_0_1 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
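The nvmf_tcp_init entries below wire the two detected E810 ports together through a network namespace, so the SPDK target at 10.0.0.2 and the kernel initiator at 10.0.0.1 can exchange real NVMe/TCP traffic on a single host. Condensed from the trace that follows, the setup amounts to this sketch (interface names cvl_0_0/cvl_0_1 are the ones this run detected; the preliminary address flushes seen in the log are omitted):

    # target-side namespace; the nvmf_tgt app is later launched inside it
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    # the initiator keeps cvl_0_1 in the root namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # admit NVMe/TCP (port 4420) arriving from the namespace side
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    # reachability check in both directions, as the log shows next
    ping -c 1 10.0.0.2
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1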
00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:34:57.245 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:57.245 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.246 ms 00:34:57.245 00:34:57.245 --- 10.0.0.2 ping statistics --- 00:34:57.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.245 rtt min/avg/max/mdev = 0.246/0.246/0.246/0.000 ms 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:57.245 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:57.245 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:34:57.245 00:34:57.245 --- 10.0.0.1 ping statistics --- 00:34:57.245 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:57.245 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:34:57.245 21:41:31 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:34:57.245 21:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:34:57.245 21:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:34:57.245 21:41:31 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:34:57.245 21:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:34:57.245 21:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:34:57.245 21:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:34:57.245 21:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:34:57.245 21:41:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:34:57.245 EAL: No free 2048 kB hugepages reported on node 1 00:35:01.438 
21:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:35:01.438 21:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:35:01.438 21:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:35:01.438 21:41:36 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:35:01.438 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.629 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:35:05.629 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.629 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.629 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=1069432 00:35:05.629 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:35:05.629 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:35:05.629 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 1069432 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 1069432 ']' 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.629 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:05.629 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.629 [2024-07-11 21:41:40.311399] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:05.629 [2024-07-11 21:41:40.311485] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.629 EAL: No free 2048 kB hugepages reported on node 1 00:35:05.629 [2024-07-11 21:41:40.384085] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:05.889 [2024-07-11 21:41:40.480760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.889 [2024-07-11 21:41:40.480830] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
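The rpc_cmd entries that follow configure the passthru target over JSON-RPC: because nvmf_tgt was started with --wait-for-rpc, nvmf_set_config --passthru-identify-ctrlr has to land before framework_start_init; after that the TCP transport is created, the local drive at 0000:88:00.0 is attached as bdev Nvme0n1, and the bdev is exported through subsystem cnode1. rpc_cmd is the harness's wrapper around scripts/rpc.py, so the same sequence can be replayed by hand roughly as follows (a sketch against the default /var/tmp/spdk.sock socket, not the test itself):

    rpc=scripts/rpc.py
    $rpc nvmf_set_config --passthru-identify-ctrlr   # forward Identify admin commands to the real controller
    $rpc framework_start_init
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420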
00:35:05.889 [2024-07-11 21:41:40.480846] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.889 [2024-07-11 21:41:40.480860] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.889 [2024-07-11 21:41:40.480871] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:05.889 [2024-07-11 21:41:40.484779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.889 [2024-07-11 21:41:40.484830] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.889 [2024-07-11 21:41:40.484921] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.889 [2024-07-11 21:41:40.484918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:35:05.889 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:05.889 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:35:05.889 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:35:05.889 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.889 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.889 INFO: Log level set to 20 00:35:05.889 INFO: Requests: 00:35:05.889 { 00:35:05.889 "jsonrpc": "2.0", 00:35:05.889 "method": "nvmf_set_config", 00:35:05.889 "id": 1, 00:35:05.889 "params": { 00:35:05.889 "admin_cmd_passthru": { 00:35:05.889 "identify_ctrlr": true 00:35:05.889 } 00:35:05.889 } 00:35:05.889 } 00:35:05.889 00:35:05.889 INFO: response: 00:35:05.889 { 00:35:05.889 "jsonrpc": "2.0", 00:35:05.889 "id": 1, 00:35:05.889 "result": true 00:35:05.889 } 00:35:05.889 00:35:05.889 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:05.889 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:35:05.889 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:05.889 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:05.889 INFO: Setting log level to 20 00:35:05.889 INFO: Setting log level to 20 00:35:05.889 INFO: Log level set to 20 00:35:05.889 INFO: Log level set to 20 00:35:05.889 INFO: Requests: 00:35:05.889 { 00:35:05.889 "jsonrpc": "2.0", 00:35:05.889 "method": "framework_start_init", 00:35:05.889 "id": 1 00:35:05.889 } 00:35:05.889 00:35:05.889 INFO: Requests: 00:35:05.889 { 00:35:05.889 "jsonrpc": "2.0", 00:35:05.889 "method": "framework_start_init", 00:35:05.889 "id": 1 00:35:05.889 } 00:35:05.889 00:35:06.149 [2024-07-11 21:41:40.661178] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:35:06.149 INFO: response: 00:35:06.149 { 00:35:06.150 "jsonrpc": "2.0", 00:35:06.150 "id": 1, 00:35:06.150 "result": true 00:35:06.150 } 00:35:06.150 00:35:06.150 INFO: response: 00:35:06.150 { 00:35:06.150 "jsonrpc": "2.0", 00:35:06.150 "id": 1, 00:35:06.150 "result": true 00:35:06.150 } 00:35:06.150 00:35:06.150 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.150 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:06.150 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.150 21:41:40 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:35:06.150 INFO: Setting log level to 40 00:35:06.150 INFO: Setting log level to 40 00:35:06.150 INFO: Setting log level to 40 00:35:06.150 [2024-07-11 21:41:40.671453] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:06.150 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:06.150 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:35:06.150 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:06.150 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:06.150 21:41:40 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:35:06.150 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:06.150 21:41:40 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.434 Nvme0n1 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.434 [2024-07-11 21:41:43.563177] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.434 [ 00:35:09.434 { 00:35:09.434 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:35:09.434 "subtype": "Discovery", 00:35:09.434 "listen_addresses": [], 00:35:09.434 "allow_any_host": true, 00:35:09.434 "hosts": [] 00:35:09.434 }, 00:35:09.434 { 00:35:09.434 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:35:09.434 "subtype": "NVMe", 00:35:09.434 "listen_addresses": [ 00:35:09.434 { 00:35:09.434 "trtype": "TCP", 00:35:09.434 "adrfam": "IPv4", 00:35:09.434 "traddr": "10.0.0.2", 00:35:09.434 "trsvcid": "4420" 00:35:09.434 } 00:35:09.434 ], 00:35:09.434 "allow_any_host": true, 00:35:09.434 "hosts": [], 00:35:09.434 "serial_number": 
"SPDK00000000000001", 00:35:09.434 "model_number": "SPDK bdev Controller", 00:35:09.434 "max_namespaces": 1, 00:35:09.434 "min_cntlid": 1, 00:35:09.434 "max_cntlid": 65519, 00:35:09.434 "namespaces": [ 00:35:09.434 { 00:35:09.434 "nsid": 1, 00:35:09.434 "bdev_name": "Nvme0n1", 00:35:09.434 "name": "Nvme0n1", 00:35:09.434 "nguid": "C92C2CD9E95E4A9997070247BC6DCF24", 00:35:09.434 "uuid": "c92c2cd9-e95e-4a99-9707-0247bc6dcf24" 00:35:09.434 } 00:35:09.434 ] 00:35:09.434 } 00:35:09.434 ] 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:35:09.434 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:35:09.434 EAL: No free 2048 kB hugepages reported on node 1 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:35:09.434 21:41:43 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:35:09.434 rmmod nvme_tcp 00:35:09.434 rmmod nvme_fabrics 00:35:09.434 rmmod nvme_keyring 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:35:09.434 21:41:43 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 1069432 ']' 00:35:09.434 21:41:43 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 1069432 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 1069432 ']' 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 1069432 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:35:09.434 21:41:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1069432 00:35:09.434 21:41:44 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:35:09.434 21:41:44 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:35:09.434 21:41:44 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1069432' 00:35:09.434 killing process with pid 1069432 00:35:09.434 21:41:44 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 1069432 00:35:09.434 21:41:44 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 1069432 00:35:10.808 21:41:45 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:35:10.808 21:41:45 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:35:10.808 21:41:45 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:35:10.808 21:41:45 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:35:10.808 21:41:45 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:35:10.808 21:41:45 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:10.808 21:41:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:10.808 21:41:45 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.339 21:41:47 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:35:13.339 00:35:13.339 real 0m17.874s 00:35:13.339 user 0m26.686s 00:35:13.339 sys 0m2.262s 00:35:13.339 21:41:47 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:13.339 21:41:47 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:35:13.339 ************************************ 00:35:13.339 END TEST nvmf_identify_passthru 00:35:13.339 ************************************ 00:35:13.339 21:41:47 -- common/autotest_common.sh@1142 -- # return 0 00:35:13.339 21:41:47 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:13.339 21:41:47 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:13.339 21:41:47 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:13.339 21:41:47 -- common/autotest_common.sh@10 -- # set +x 00:35:13.339 ************************************ 00:35:13.339 START TEST nvmf_dif 00:35:13.339 ************************************ 00:35:13.339 21:41:47 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:35:13.339 * Looking for test storage... 
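(Aside — a condensed recap of the nvmf_identify_passthru run that completed above, before the dif test continues. The rpc_cmd traces reduce to the RPC sequence below; rpc.py stands in for the harness's rpc_cmd wrapper, and all flags, NQNs and addresses are copied verbatim from the trace. This is a sketch, not the literal test script.)

# Passthru identify must be configured before framework init:
scripts/rpc.py nvmf_set_config --passthru-identify-ctrlr
scripts/rpc.py framework_start_init
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
# Attach the physical NVMe drive and export it over TCP:
scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# The test passes when identify data fetched over the fabric matches the physical
# drive (serial PHLJ916004901P0FGN, model INTEL) rather than the virtual controller:
build/bin/spdk_nvme_identify -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' | grep 'Serial Number:'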
00:35:13.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:35:13.339 21:41:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:13.339 21:41:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:13.340 21:41:47 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:13.340 21:41:47 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:13.340 21:41:47 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:13.340 21:41:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.340 21:41:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.340 21:41:47 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.340 21:41:47 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:35:13.340 21:41:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:35:13.340 21:41:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:35:13.340 21:41:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:35:13.340 21:41:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:35:13.340 21:41:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:35:13.340 21:41:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:13.340 21:41:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:35:13.340 21:41:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:35:13.340 21:41:47 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:35:13.340 21:41:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:15.239 21:41:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:35:15.239 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:35:15.240 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:35:15.240 Found net devices under 0000:0a:00.0: cvl_0_0 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:35:15.240 Found net devices under 0000:0a:00.1: cvl_0_1 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:15.240 21:41:49 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:35:15.240 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:15.240 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.235 ms 00:35:15.240 00:35:15.240 --- 10.0.0.2 ping statistics --- 00:35:15.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.240 rtt min/avg/max/mdev = 0.235/0.235/0.235/0.000 ms 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:15.240 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:15.240 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.122 ms 00:35:15.240 00:35:15.240 --- 10.0.0.1 ping statistics --- 00:35:15.240 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:15.240 rtt min/avg/max/mdev = 0.122/0.122/0.122/0.000 ms 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:35:15.240 21:41:49 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:16.174 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:16.174 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:16.174 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:16.174 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:16.174 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:16.174 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:16.174 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:16.174 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:16.174 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:16.174 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:35:16.174 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:35:16.174 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:35:16.174 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:35:16.174 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:35:16.174 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:35:16.174 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:35:16.174 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:35:16.433 21:41:51 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:35:16.433 21:41:51 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:35:16.433 21:41:51 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:35:16.433 21:41:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=1072689 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:35:16.433 21:41:51 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 1072689 00:35:16.433 21:41:51 nvmf_dif -- common/autotest_common.sh@829 -- # '[' -z 1072689 ']' 00:35:16.433 21:41:51 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:16.433 21:41:51 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:35:16.433 21:41:51 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:16.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:16.433 21:41:51 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:35:16.433 21:41:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.433 [2024-07-11 21:41:51.113387] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:35:16.433 [2024-07-11 21:41:51.113474] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:16.433 EAL: No free 2048 kB hugepages reported on node 1 00:35:16.433 [2024-07-11 21:41:51.179883] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:16.692 [2024-07-11 21:41:51.268737] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:16.692 [2024-07-11 21:41:51.268811] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:16.692 [2024-07-11 21:41:51.268825] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:16.692 [2024-07-11 21:41:51.268836] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:16.692 [2024-07-11 21:41:51.268846] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
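(Aside — the two-namespace TCP topology this target runs in was assembled by nvmf_tcp_init earlier in the trace; condensed below, commands verbatim from the log. cvl_0_0 and cvl_0_1 are the two ports of the E810 NIC (device 0x159b, ice driver) found during device discovery.)

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port, moved into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
# Connectivity was verified in both directions by the pings above (0.235 ms / 0.122 ms),
# and nvmf_tgt itself is then launched inside the namespace:
ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF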
00:35:16.692 [2024-07-11 21:41:51.268880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:35:16.692 21:41:51 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.692 21:41:51 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:16.692 21:41:51 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:35:16.692 21:41:51 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.692 [2024-07-11 21:41:51.417661] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.692 21:41:51 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:16.692 21:41:51 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:16.692 ************************************ 00:35:16.692 START TEST fio_dif_1_default 00:35:16.692 ************************************ 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:16.692 bdev_null0 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.692 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:16.957 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.957 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:16.958 [2024-07-11 21:41:51.478018] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:16.958 { 00:35:16.958 "params": { 00:35:16.958 "name": "Nvme$subsystem", 00:35:16.958 "trtype": "$TEST_TRANSPORT", 00:35:16.958 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:16.958 "adrfam": "ipv4", 00:35:16.958 "trsvcid": "$NVMF_PORT", 00:35:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:16.958 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:16.958 "hdgst": ${hdgst:-false}, 00:35:16.958 "ddgst": ${ddgst:-false} 00:35:16.958 }, 00:35:16.958 "method": "bdev_nvme_attach_controller" 00:35:16.958 } 00:35:16.958 EOF 00:35:16.958 )") 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:16.958 "params": { 00:35:16.958 "name": "Nvme0", 00:35:16.958 "trtype": "tcp", 00:35:16.958 "traddr": "10.0.0.2", 00:35:16.958 "adrfam": "ipv4", 00:35:16.958 "trsvcid": "4420", 00:35:16.958 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:16.958 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:16.958 "hdgst": false, 00:35:16.958 "ddgst": false 00:35:16.958 }, 00:35:16.958 "method": "bdev_nvme_attach_controller" 00:35:16.958 }' 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:16.958 21:41:51 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:17.216 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:17.216 fio-3.35 00:35:17.216 Starting 1 thread 00:35:17.216 EAL: No free 2048 kB hugepages reported on node 1 00:35:29.455 00:35:29.455 filename0: (groupid=0, jobs=1): err= 0: pid=1072915: Thu Jul 11 21:42:02 2024 00:35:29.455 read: IOPS=174, BW=699KiB/s (716kB/s)(7008KiB/10021msec) 00:35:29.455 slat (nsec): min=4425, max=51726, avg=9261.48, stdev=2531.48 00:35:29.455 clat (usec): min=582, max=48570, avg=22849.13, stdev=20362.20 00:35:29.455 lat (usec): min=590, max=48584, avg=22858.39, stdev=20361.97 00:35:29.455 clat percentiles (usec): 00:35:29.455 | 1.00th=[ 603], 5.00th=[ 619], 10.00th=[ 627], 20.00th=[ 644], 00:35:29.455 | 30.00th=[ 652], 40.00th=[ 676], 50.00th=[41157], 60.00th=[41157], 00:35:29.455 | 70.00th=[41157], 80.00th=[41157], 90.00th=[42206], 95.00th=[42206], 00:35:29.455 | 99.00th=[42206], 99.50th=[42206], 99.90th=[48497], 99.95th=[48497], 00:35:29.455 | 99.99th=[48497] 00:35:29.455 bw ( KiB/s): min= 576, max= 768, per=99.95%, avg=699.20, stdev=70.63, samples=20 00:35:29.455 iops : min= 144, max= 192, 
avg=174.80, stdev=17.66, samples=20 00:35:29.455 lat (usec) : 750=45.26%, 1000=0.40% 00:35:29.455 lat (msec) : 50=54.34% 00:35:29.455 cpu : usr=88.22%, sys=10.87%, ctx=36, majf=0, minf=243 00:35:29.455 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:29.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:29.455 issued rwts: total=1752,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:29.455 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:29.455 00:35:29.455 Run status group 0 (all jobs): 00:35:29.455 READ: bw=699KiB/s (716kB/s), 699KiB/s-699KiB/s (716kB/s-716kB/s), io=7008KiB (7176kB), run=10021-10021msec 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.455 00:35:29.455 real 0m10.993s 00:35:29.455 user 0m9.955s 00:35:29.455 sys 0m1.344s 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:35:29.455 ************************************ 00:35:29.455 END TEST fio_dif_1_default 00:35:29.455 ************************************ 00:35:29.455 21:42:02 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:29.455 21:42:02 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:35:29.455 21:42:02 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:29.455 21:42:02 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:29.455 21:42:02 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:29.455 ************************************ 00:35:29.455 START TEST fio_dif_1_multi_subsystems 00:35:29.455 ************************************ 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:35:29.455 21:42:02 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.455 bdev_null0 00:35:29.455 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.456 [2024-07-11 21:42:02.526003] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.456 bdev_null1 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:29.456 { 00:35:29.456 "params": { 00:35:29.456 "name": "Nvme$subsystem", 00:35:29.456 "trtype": "$TEST_TRANSPORT", 00:35:29.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.456 "adrfam": "ipv4", 00:35:29.456 "trsvcid": "$NVMF_PORT", 00:35:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.456 "hdgst": ${hdgst:-false}, 00:35:29.456 "ddgst": ${ddgst:-false} 00:35:29.456 }, 00:35:29.456 "method": "bdev_nvme_attach_controller" 00:35:29.456 } 00:35:29.456 EOF 00:35:29.456 )") 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 
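(Aside — the interleaved trace here is two generators running at once: gen_fio_conf assembling the fio job file and gen_nvmf_target_json assembling the bdev JSON, one heredoc fragment per subsystem. Neither touches a temp file; both are handed to fio on inherited file descriptors. A sketch of the shape of the final invocation, with the fd wiring assumed from the /dev/fd arguments visible in the trace:)

LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
  /usr/src/fio/fio --ioengine=spdk_bdev \
  --spdk_json_conf /dev/fd/62 /dev/fd/61 \
  62< <(create_json_sub_conf 0 1) 61< <(gen_fio_conf)   # bash process substitution

Feeding the JSON on fd 62 and the job file on fd 61 keeps the per-test configuration ephemeral and avoids any cleanup between the dozens of fio runs in this suite.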
00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:29.456 { 00:35:29.456 "params": { 00:35:29.456 "name": "Nvme$subsystem", 00:35:29.456 "trtype": "$TEST_TRANSPORT", 00:35:29.456 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:29.456 "adrfam": "ipv4", 00:35:29.456 "trsvcid": "$NVMF_PORT", 00:35:29.456 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:29.456 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:29.456 "hdgst": ${hdgst:-false}, 00:35:29.456 "ddgst": ${ddgst:-false} 00:35:29.456 }, 00:35:29.456 "method": "bdev_nvme_attach_controller" 00:35:29.456 } 00:35:29.456 EOF 00:35:29.456 )") 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.456 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:29.457 "params": { 00:35:29.457 "name": "Nvme0", 00:35:29.457 "trtype": "tcp", 00:35:29.457 "traddr": "10.0.0.2", 00:35:29.457 "adrfam": "ipv4", 00:35:29.457 "trsvcid": "4420", 00:35:29.457 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:29.457 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:29.457 "hdgst": false, 00:35:29.457 "ddgst": false 00:35:29.457 }, 00:35:29.457 "method": "bdev_nvme_attach_controller" 00:35:29.457 },{ 00:35:29.457 "params": { 00:35:29.457 "name": "Nvme1", 00:35:29.457 "trtype": "tcp", 00:35:29.457 "traddr": "10.0.0.2", 00:35:29.457 "adrfam": "ipv4", 00:35:29.457 "trsvcid": "4420", 00:35:29.457 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:29.457 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:29.457 "hdgst": false, 00:35:29.457 "ddgst": false 00:35:29.457 }, 00:35:29.457 "method": "bdev_nvme_attach_controller" 00:35:29.457 }' 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:29.457 21:42:02 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:29.457 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:29.457 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:35:29.457 fio-3.35 00:35:29.457 Starting 2 threads 00:35:29.457 EAL: No free 2048 kB hugepages reported on node 1 00:35:39.427 00:35:39.427 filename0: (groupid=0, jobs=1): err= 0: pid=1074322: Thu Jul 11 21:42:13 2024 00:35:39.427 read: IOPS=96, BW=384KiB/s (394kB/s)(3856KiB/10032msec) 00:35:39.427 slat (usec): min=8, max=128, avg=10.44, stdev= 5.01 00:35:39.427 clat (usec): min=40889, max=46896, avg=41589.09, stdev=601.90 00:35:39.427 lat (usec): min=40897, max=46941, avg=41599.53, stdev=602.75 00:35:39.427 clat percentiles (usec): 00:35:39.427 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:35:39.427 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41681], 60.00th=[42206], 00:35:39.427 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:39.427 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:35:39.427 | 99.99th=[46924] 00:35:39.427 bw 
( KiB/s): min= 352, max= 416, per=50.06%, avg=384.00, stdev=10.38, samples=20 00:35:39.427 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:35:39.427 lat (msec) : 50=100.00% 00:35:39.427 cpu : usr=94.16%, sys=5.49%, ctx=24, majf=0, minf=134 00:35:39.427 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.427 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.427 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:39.427 filename1: (groupid=0, jobs=1): err= 0: pid=1074323: Thu Jul 11 21:42:13 2024 00:35:39.427 read: IOPS=95, BW=383KiB/s (392kB/s)(3840KiB/10031msec) 00:35:39.427 slat (usec): min=7, max=131, avg=10.10, stdev= 4.88 00:35:39.427 clat (usec): min=614, max=46892, avg=41764.12, stdev=2692.14 00:35:39.427 lat (usec): min=622, max=46936, avg=41774.22, stdev=2692.08 00:35:39.427 clat percentiles (usec): 00:35:39.427 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41681], 20.00th=[41681], 00:35:39.427 | 30.00th=[41681], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:35:39.427 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:35:39.427 | 99.00th=[42206], 99.50th=[42730], 99.90th=[46924], 99.95th=[46924], 00:35:39.427 | 99.99th=[46924] 00:35:39.427 bw ( KiB/s): min= 352, max= 416, per=49.80%, avg=382.40, stdev=12.61, samples=20 00:35:39.427 iops : min= 88, max= 104, avg=95.60, stdev= 3.15, samples=20 00:35:39.427 lat (usec) : 750=0.42% 00:35:39.427 lat (msec) : 50=99.58% 00:35:39.427 cpu : usr=93.76%, sys=5.61%, ctx=33, majf=0, minf=234 00:35:39.427 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:39.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:39.427 issued rwts: total=960,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:39.427 latency : target=0, window=0, percentile=100.00%, depth=4 00:35:39.427 00:35:39.427 Run status group 0 (all jobs): 00:35:39.427 READ: bw=767KiB/s (786kB/s), 383KiB/s-384KiB/s (392kB/s-394kB/s), io=7696KiB (7881kB), run=10031-10032msec 00:35:39.427 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.428 21:42:13 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.428 00:35:39.428 real 0m11.361s 00:35:39.428 user 0m20.133s 00:35:39.428 sys 0m1.455s 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 ************************************ 00:35:39.428 END TEST fio_dif_1_multi_subsystems 00:35:39.428 ************************************ 00:35:39.428 21:42:13 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:35:39.428 21:42:13 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:35:39.428 21:42:13 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:35:39.428 21:42:13 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 ************************************ 00:35:39.428 START TEST fio_dif_rand_params 00:35:39.428 ************************************ 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 
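Note on the fio_dif_1_multi_subsystems results above: both jobs issue 4 KiB random reads against null bdevs behind two NVMe/TCP subsystems, so the reported bandwidth, IOPS and issued-I/O counts must agree up to the block size. A quick sanity check for filename0, a sketch using values copied from the report (any shell with awk will do):

  # filename0 issued 964 reads of 4 KiB in 10032 ms:
  awk 'BEGIN { printf "%.0f KiB/s\n", 964 * 4 / 10.032 }'   # prints 384, matching BW=384KiB/s

Both files also show the same tight ~41-42 ms completion-latency band in their percentile tables, which is why the stdev figures there are so small.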
00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 bdev_null0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:39.428 [2024-07-11 21:42:13.940391] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:39.428 { 00:35:39.428 "params": { 00:35:39.428 "name": "Nvme$subsystem", 00:35:39.428 "trtype": "$TEST_TRANSPORT", 00:35:39.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:39.428 "adrfam": "ipv4", 00:35:39.428 "trsvcid": "$NVMF_PORT", 00:35:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:39.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:39.428 "hdgst": ${hdgst:-false}, 00:35:39.428 "ddgst": ${ddgst:-false} 00:35:39.428 }, 00:35:39.428 "method": "bdev_nvme_attach_controller" 00:35:39.428 } 00:35:39.428 EOF 00:35:39.428 )") 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
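Note on the gen_nvmf_target_json trace above: the helper appends one heredoc fragment per subsystem to the config array, runs the result through jq, and relies on IFS to comma-join the fragments when the array is expanded. A minimal sketch of that join mechanism (fragment contents abbreviated here; the real template carries the full params block shown in the trace):

  config=()
  for n in 0 1; do
    config+=("{\"name\":\"Nvme$n\",\"method\":\"bdev_nvme_attach_controller\"}")
  done
  IFS=,
  printf '%s\n' "${config[*]}"   # "${config[*]}" joins elements with the first char of IFS

This is why printf receives the whole comma-joined config as a single argument in the entry that follows.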
00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:39.428 "params": { 00:35:39.428 "name": "Nvme0", 00:35:39.428 "trtype": "tcp", 00:35:39.428 "traddr": "10.0.0.2", 00:35:39.428 "adrfam": "ipv4", 00:35:39.428 "trsvcid": "4420", 00:35:39.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:39.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:39.428 "hdgst": false, 00:35:39.428 "ddgst": false 00:35:39.428 }, 00:35:39.428 "method": "bdev_nvme_attach_controller" 00:35:39.428 }' 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:39.428 21:42:13 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:39.428 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:35:39.428 ... 
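Note on the invocation above: /dev/fd/62 and /dev/fd/61 are bash process substitutions, so neither the JSON bdev config nor the generated fio job file ever touches disk, and the fio plugin is injected via LD_PRELOAD. The effective shape of the command is sketched below (paths taken from the trace; the <(...) substitutions illustrate the mechanism rather than quoting the script verbatim):

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev \
    --spdk_json_conf <(gen_nvmf_target_json 0) \
    <(gen_fio_conf)

With ioengine=spdk_bdev, the filename in each job section names a bdev rather than a block-device path; for controller Nvme0, namespace 1 appears as bdev Nvme0n1.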
00:35:39.428 fio-3.35 00:35:39.428 Starting 3 threads 00:35:39.686 EAL: No free 2048 kB hugepages reported on node 1 00:35:46.236 00:35:46.236 filename0: (groupid=0, jobs=1): err= 0: pid=1076226: Thu Jul 11 21:42:19 2024 00:35:46.236 read: IOPS=207, BW=26.0MiB/s (27.2MB/s)(131MiB/5047msec) 00:35:46.236 slat (nsec): min=4688, max=43068, avg=17802.84, stdev=5316.93 00:35:46.236 clat (usec): min=5308, max=57025, avg=14366.78, stdev=9935.72 00:35:46.236 lat (usec): min=5321, max=57044, avg=14384.58, stdev=9935.64 00:35:46.236 clat percentiles (usec): 00:35:46.236 | 1.00th=[ 5604], 5.00th=[ 7373], 10.00th=[ 8455], 20.00th=[ 9503], 00:35:46.236 | 30.00th=[10814], 40.00th=[11600], 50.00th=[12387], 60.00th=[13042], 00:35:46.236 | 70.00th=[13829], 80.00th=[14615], 90.00th=[16319], 95.00th=[49021], 00:35:46.236 | 99.00th=[53740], 99.50th=[54264], 99.90th=[55837], 99.95th=[56886], 00:35:46.236 | 99.99th=[56886] 00:35:46.236 bw ( KiB/s): min=21504, max=32000, per=31.33%, avg=26803.20, stdev=3024.34, samples=10 00:35:46.236 iops : min= 168, max= 250, avg=209.40, stdev=23.63, samples=10 00:35:46.236 lat (msec) : 10=23.16%, 20=70.54%, 50=2.00%, 100=4.29% 00:35:46.236 cpu : usr=94.07%, sys=5.47%, ctx=10, majf=0, minf=62 00:35:46.236 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.236 issued rwts: total=1049,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.236 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:46.236 filename0: (groupid=0, jobs=1): err= 0: pid=1076227: Thu Jul 11 21:42:19 2024 00:35:46.236 read: IOPS=220, BW=27.5MiB/s (28.9MB/s)(138MiB/5006msec) 00:35:46.236 slat (nsec): min=4718, max=85092, avg=16448.62, stdev=4782.43 00:35:46.236 clat (usec): min=5060, max=90095, avg=13603.55, stdev=9339.67 00:35:46.236 lat (usec): min=5073, max=90109, avg=13619.99, stdev=9339.63 00:35:46.236 clat percentiles (usec): 00:35:46.236 | 1.00th=[ 5735], 5.00th=[ 7701], 10.00th=[ 8356], 20.00th=[ 9241], 00:35:46.236 | 30.00th=[10421], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:35:46.236 | 70.00th=[13173], 80.00th=[13960], 90.00th=[15008], 95.00th=[45876], 00:35:46.236 | 99.00th=[54789], 99.50th=[56361], 99.90th=[57934], 99.95th=[89654], 00:35:46.236 | 99.99th=[89654] 00:35:46.236 bw ( KiB/s): min=21760, max=32000, per=32.90%, avg=28140.00, stdev=2890.00, samples=10 00:35:46.236 iops : min= 170, max= 250, avg=219.80, stdev=22.58, samples=10 00:35:46.236 lat (msec) : 10=25.86%, 20=69.06%, 50=1.45%, 100=3.63% 00:35:46.236 cpu : usr=94.51%, sys=5.01%, ctx=11, majf=0, minf=187 00:35:46.236 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.236 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.236 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.236 issued rwts: total=1102,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.236 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:46.236 filename0: (groupid=0, jobs=1): err= 0: pid=1076228: Thu Jul 11 21:42:19 2024 00:35:46.236 read: IOPS=244, BW=30.5MiB/s (32.0MB/s)(153MiB/5004msec) 00:35:46.236 slat (nsec): min=5041, max=43061, avg=17874.75, stdev=4757.48 00:35:46.236 clat (usec): min=4693, max=90825, avg=12261.37, stdev=8535.77 00:35:46.236 lat (usec): min=4707, max=90846, avg=12279.24, stdev=8535.89 00:35:46.236 clat percentiles (usec): 
00:35:46.236 | 1.00th=[ 5080], 5.00th=[ 6521], 10.00th=[ 7635], 20.00th=[ 8455], 00:35:46.237 | 30.00th=[ 9372], 40.00th=[10552], 50.00th=[11338], 60.00th=[11863], 00:35:46.237 | 70.00th=[12387], 80.00th=[13042], 90.00th=[14222], 95.00th=[15795], 00:35:46.237 | 99.00th=[52691], 99.50th=[53740], 99.90th=[90702], 99.95th=[90702], 00:35:46.237 | 99.99th=[90702] 00:35:46.237 bw ( KiB/s): min=26112, max=37632, per=36.50%, avg=31225.50, stdev=3670.35, samples=10 00:35:46.237 iops : min= 204, max= 294, avg=243.90, stdev=28.66, samples=10 00:35:46.237 lat (msec) : 10=35.11%, 20=61.54%, 50=1.06%, 100=2.29% 00:35:46.237 cpu : usr=92.30%, sys=6.38%, ctx=328, majf=0, minf=91 00:35:46.237 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:35:46.237 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.237 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:46.237 issued rwts: total=1222,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:46.237 latency : target=0, window=0, percentile=100.00%, depth=3 00:35:46.237 00:35:46.237 Run status group 0 (all jobs): 00:35:46.237 READ: bw=83.5MiB/s (87.6MB/s), 26.0MiB/s-30.5MiB/s (27.2MB/s-32.0MB/s), io=422MiB (442MB), run=5004-5047msec 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 
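Note: this second fio_dif_rand_params pass (NULL_DIF=2, bs=4k, numjobs=8, iodepth=16, files=2) repeats the same per-subsystem setup for subsystems 0, 1 and 2. Condensed into a loop, the RPC sequence traced below is (rpc_cmd is the harness wrapper around SPDK's rpc.py; all arguments are copied from the trace):

  for sub in 0 1 2; do
    rpc_cmd bdev_null_create "bdev_null$sub" 64 512 --md-size 16 --dif-type 2
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$sub" \
      --serial-number "53313233-$sub" --allow-any-host
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$sub" "bdev_null$sub"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$sub" \
      -t tcp -a 10.0.0.2 -s 4420
  done

Each null bdev is 64 MiB with 512-byte blocks, a 16-byte metadata region, and DIF type 2 protection information.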
00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 bdev_null0 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 [2024-07-11 21:42:20.173155] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 bdev_null1 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 
-- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 bdev_null2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:46.237 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat 
<<-EOF 00:35:46.237 { 00:35:46.237 "params": { 00:35:46.237 "name": "Nvme$subsystem", 00:35:46.237 "trtype": "$TEST_TRANSPORT", 00:35:46.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.237 "adrfam": "ipv4", 00:35:46.237 "trsvcid": "$NVMF_PORT", 00:35:46.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.237 "hdgst": ${hdgst:-false}, 00:35:46.237 "ddgst": ${ddgst:-false} 00:35:46.237 }, 00:35:46.238 "method": "bdev_nvme_attach_controller" 00:35:46.238 } 00:35:46.238 EOF 00:35:46.238 )") 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.238 { 00:35:46.238 "params": { 00:35:46.238 "name": "Nvme$subsystem", 00:35:46.238 "trtype": "$TEST_TRANSPORT", 00:35:46.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.238 "adrfam": "ipv4", 00:35:46.238 "trsvcid": "$NVMF_PORT", 00:35:46.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.238 "hdgst": ${hdgst:-false}, 00:35:46.238 "ddgst": ${ddgst:-false} 00:35:46.238 }, 00:35:46.238 "method": "bdev_nvme_attach_controller" 00:35:46.238 } 00:35:46.238 EOF 00:35:46.238 )") 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@554 -- # cat 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:46.238 { 00:35:46.238 "params": { 00:35:46.238 "name": "Nvme$subsystem", 00:35:46.238 "trtype": "$TEST_TRANSPORT", 00:35:46.238 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:46.238 "adrfam": "ipv4", 00:35:46.238 "trsvcid": "$NVMF_PORT", 00:35:46.238 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:46.238 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:46.238 "hdgst": ${hdgst:-false}, 00:35:46.238 "ddgst": ${ddgst:-false} 00:35:46.238 }, 00:35:46.238 "method": "bdev_nvme_attach_controller" 00:35:46.238 } 00:35:46.238 EOF 00:35:46.238 )") 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:46.238 "params": { 00:35:46.238 "name": "Nvme0", 00:35:46.238 "trtype": "tcp", 00:35:46.238 "traddr": "10.0.0.2", 00:35:46.238 "adrfam": "ipv4", 00:35:46.238 "trsvcid": "4420", 00:35:46.238 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:46.238 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:46.238 "hdgst": false, 00:35:46.238 "ddgst": false 00:35:46.238 }, 00:35:46.238 "method": "bdev_nvme_attach_controller" 00:35:46.238 },{ 00:35:46.238 "params": { 00:35:46.238 "name": "Nvme1", 00:35:46.238 "trtype": "tcp", 00:35:46.238 "traddr": "10.0.0.2", 00:35:46.238 "adrfam": "ipv4", 00:35:46.238 "trsvcid": "4420", 00:35:46.238 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:46.238 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:46.238 "hdgst": false, 00:35:46.238 "ddgst": false 00:35:46.238 }, 00:35:46.238 "method": "bdev_nvme_attach_controller" 00:35:46.238 },{ 00:35:46.238 "params": { 00:35:46.238 "name": "Nvme2", 00:35:46.238 "trtype": "tcp", 00:35:46.238 "traddr": "10.0.0.2", 00:35:46.238 "adrfam": "ipv4", 00:35:46.238 "trsvcid": "4420", 00:35:46.238 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:35:46.238 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:35:46.238 "hdgst": false, 00:35:46.238 "ddgst": false 00:35:46.238 }, 00:35:46.238 "method": "bdev_nvme_attach_controller" 00:35:46.238 }' 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params 
-- common/autotest_common.sh@1345 -- # asan_lib= 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:46.238 21:42:20 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:46.238 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:46.238 ... 00:35:46.238 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:46.238 ... 00:35:46.238 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:35:46.238 ... 00:35:46.238 fio-3.35 00:35:46.238 Starting 24 threads 00:35:46.238 EAL: No free 2048 kB hugepages reported on node 1 00:35:58.448 00:35:58.448 filename0: (groupid=0, jobs=1): err= 0: pid=1077074: Thu Jul 11 21:42:31 2024 00:35:58.448 read: IOPS=144, BW=578KiB/s (592kB/s)(5848KiB/10116msec) 00:35:58.448 slat (nsec): min=8524, max=82586, avg=31415.93, stdev=17995.82 00:35:58.448 clat (msec): min=19, max=391, avg=109.99, stdev=112.01 00:35:58.448 lat (msec): min=19, max=391, avg=110.02, stdev=112.00 00:35:58.448 clat percentiles (msec): 00:35:58.448 | 1.00th=[ 23], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.448 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.448 | 70.00th=[ 249], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.448 | 99.00th=[ 309], 99.50th=[ 393], 99.90th=[ 393], 99.95th=[ 393], 00:35:58.448 | 99.99th=[ 393] 00:35:58.448 bw ( KiB/s): min= 144, max= 1920, per=4.28%, avg=578.40, stdev=675.92, samples=20 00:35:58.448 iops : min= 36, max= 480, avg=144.60, stdev=168.98, samples=20 00:35:58.448 lat (msec) : 20=0.62%, 50=67.24%, 250=2.19%, 500=29.96% 00:35:58.448 cpu : usr=97.84%, sys=1.44%, ctx=88, majf=0, minf=30 00:35:58.448 IO depths : 1=4.3%, 2=9.6%, 4=22.0%, 8=55.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:35:58.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.448 complete : 0=0.0%, 4=93.2%, 8=1.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.448 issued rwts: total=1462,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.448 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.448 filename0: (groupid=0, jobs=1): err= 0: pid=1077076: Thu Jul 11 21:42:31 2024 00:35:58.448 read: IOPS=145, BW=582KiB/s (596kB/s)(5888KiB/10116msec) 00:35:58.448 slat (usec): min=8, max=111, avg=31.42, stdev=19.48 00:35:58.448 clat (msec): min=19, max=306, avg=109.65, stdev=109.87 00:35:58.448 lat (msec): min=19, max=306, avg=109.68, stdev=109.86 00:35:58.448 clat percentiles (msec): 00:35:58.448 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:58.448 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.448 | 70.00th=[ 245], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.448 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:35:58.448 | 99.99th=[ 309] 00:35:58.448 bw ( KiB/s): min= 128, max= 1920, per=4.31%, avg=582.40, stdev=674.09, samples=20 00:35:58.448 iops : min= 32, max= 480, avg=145.60, stdev=168.52, samples=20 00:35:58.448 lat (msec) : 20=0.54%, 50=66.85%, 250=4.35%, 500=28.26% 00:35:58.448 cpu : usr=98.42%, sys=1.17%, ctx=20, majf=0, 
minf=32 00:35:58.448 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:58.448 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.448 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.448 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.448 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.448 filename0: (groupid=0, jobs=1): err= 0: pid=1077077: Thu Jul 11 21:42:31 2024 00:35:58.448 read: IOPS=144, BW=576KiB/s (590kB/s)(5832KiB/10117msec) 00:35:58.448 slat (usec): min=8, max=103, avg=30.61, stdev=16.99 00:35:58.448 clat (msec): min=18, max=446, avg=110.08, stdev=113.04 00:35:58.448 lat (msec): min=18, max=446, avg=110.11, stdev=113.03 00:35:58.448 clat percentiles (msec): 00:35:58.448 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.448 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.449 | 70.00th=[ 249], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.449 | 99.00th=[ 359], 99.50th=[ 405], 99.90th=[ 447], 99.95th=[ 447], 00:35:58.449 | 99.99th=[ 447] 00:35:58.449 bw ( KiB/s): min= 176, max= 1920, per=4.27%, avg=576.80, stdev=676.72, samples=20 00:35:58.449 iops : min= 44, max= 480, avg=144.20, stdev=169.18, samples=20 00:35:58.449 lat (msec) : 20=0.14%, 50=67.90%, 250=2.33%, 500=29.63% 00:35:58.449 cpu : usr=98.13%, sys=1.23%, ctx=58, majf=0, minf=32 00:35:58.449 IO depths : 1=4.3%, 2=8.8%, 4=19.5%, 8=59.1%, 16=8.2%, 32=0.0%, >=64=0.0% 00:35:58.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 complete : 0=0.0%, 4=92.5%, 8=1.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 issued rwts: total=1458,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.449 filename0: (groupid=0, jobs=1): err= 0: pid=1077078: Thu Jul 11 21:42:31 2024 00:35:58.449 read: IOPS=147, BW=590KiB/s (604kB/s)(5976KiB/10132msec) 00:35:58.449 slat (usec): min=4, max=109, avg=46.50, stdev=27.50 00:35:58.449 clat (msec): min=8, max=485, avg=107.75, stdev=112.17 00:35:58.449 lat (msec): min=8, max=485, avg=107.80, stdev=112.15 00:35:58.449 clat percentiles (msec): 00:35:58.449 | 1.00th=[ 13], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:58.449 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.449 | 70.00th=[ 247], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 279], 00:35:58.449 | 99.00th=[ 305], 99.50th=[ 418], 99.90th=[ 485], 99.95th=[ 485], 00:35:58.449 | 99.99th=[ 485] 00:35:58.449 bw ( KiB/s): min= 176, max= 2052, per=4.38%, avg=591.40, stdev=689.38, samples=20 00:35:58.449 iops : min= 44, max= 513, avg=147.85, stdev=172.35, samples=20 00:35:58.449 lat (msec) : 10=0.94%, 20=1.94%, 50=65.66%, 250=2.01%, 500=29.45% 00:35:58.449 cpu : usr=98.18%, sys=1.40%, ctx=18, majf=0, minf=25 00:35:58.449 IO depths : 1=4.1%, 2=8.6%, 4=19.5%, 8=59.4%, 16=8.4%, 32=0.0%, >=64=0.0% 00:35:58.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 complete : 0=0.0%, 4=92.5%, 8=1.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 issued rwts: total=1494,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.449 filename0: (groupid=0, jobs=1): err= 0: pid=1077079: Thu Jul 11 21:42:31 2024 00:35:58.449 read: IOPS=133, BW=532KiB/s (545kB/s)(5376KiB/10103msec) 00:35:58.449 slat (nsec): min=6010, max=81698, avg=22627.57, stdev=9970.07 
00:35:58.449 clat (msec): min=32, max=556, avg=119.28, stdev=140.82 00:35:58.449 lat (msec): min=32, max=556, avg=119.30, stdev=140.82 00:35:58.449 clat percentiles (msec): 00:35:58.449 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.449 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.449 | 70.00th=[ 37], 80.00th=[ 271], 90.00th=[ 355], 95.00th=[ 435], 00:35:58.449 | 99.00th=[ 464], 99.50th=[ 514], 99.90th=[ 558], 99.95th=[ 558], 00:35:58.449 | 99.99th=[ 558] 00:35:58.449 bw ( KiB/s): min= 112, max= 1920, per=3.93%, avg=531.20, stdev=671.01, samples=20 00:35:58.449 iops : min= 28, max= 480, avg=132.80, stdev=167.75, samples=20 00:35:58.449 lat (msec) : 50=70.24%, 100=1.19%, 250=1.19%, 500=26.64%, 750=0.74% 00:35:58.449 cpu : usr=98.33%, sys=1.27%, ctx=34, majf=0, minf=29 00:35:58.449 IO depths : 1=5.0%, 2=11.2%, 4=25.0%, 8=51.3%, 16=7.5%, 32=0.0%, >=64=0.0% 00:35:58.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 issued rwts: total=1344,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.449 filename0: (groupid=0, jobs=1): err= 0: pid=1077080: Thu Jul 11 21:42:31 2024 00:35:58.449 read: IOPS=126, BW=507KiB/s (519kB/s)(5120KiB/10101msec) 00:35:58.449 slat (nsec): min=8495, max=58288, avg=19642.97, stdev=8227.58 00:35:58.449 clat (msec): min=32, max=551, avg=125.24, stdev=158.17 00:35:58.449 lat (msec): min=32, max=551, avg=125.26, stdev=158.16 00:35:58.449 clat percentiles (msec): 00:35:58.449 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.449 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.449 | 70.00th=[ 37], 80.00th=[ 347], 90.00th=[ 430], 95.00th=[ 439], 00:35:58.449 | 99.00th=[ 464], 99.50th=[ 464], 99.90th=[ 550], 99.95th=[ 550], 00:35:58.449 | 99.99th=[ 550] 00:35:58.449 bw ( KiB/s): min= 128, max= 1920, per=3.74%, avg=505.60, stdev=683.97, samples=20 00:35:58.449 iops : min= 32, max= 480, avg=126.40, stdev=170.99, samples=20 00:35:58.449 lat (msec) : 50=72.50%, 100=2.50%, 500=24.53%, 750=0.47% 00:35:58.449 cpu : usr=98.37%, sys=1.12%, ctx=48, majf=0, minf=32 00:35:58.449 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:58.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.449 filename0: (groupid=0, jobs=1): err= 0: pid=1077081: Thu Jul 11 21:42:31 2024 00:35:58.449 read: IOPS=149, BW=600KiB/s (614kB/s)(6080KiB/10141msec) 00:35:58.449 slat (usec): min=5, max=121, avg=49.41, stdev=28.69 00:35:58.449 clat (msec): min=3, max=367, avg=106.25, stdev=108.46 00:35:58.449 lat (msec): min=3, max=367, avg=106.30, stdev=108.43 00:35:58.449 clat percentiles (msec): 00:35:58.449 | 1.00th=[ 5], 5.00th=[ 32], 10.00th=[ 33], 20.00th=[ 33], 00:35:58.449 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.449 | 70.00th=[ 239], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 275], 00:35:58.449 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 368], 99.95th=[ 368], 00:35:58.449 | 99.99th=[ 368] 00:35:58.449 bw ( KiB/s): min= 144, max= 2048, per=4.45%, avg=601.60, stdev=699.71, samples=20 00:35:58.449 iops : min= 36, max= 512, 
avg=150.40, stdev=174.93, samples=20 00:35:58.449 lat (msec) : 4=0.99%, 10=0.07%, 20=2.11%, 50=64.21%, 100=1.05% 00:35:58.449 lat (msec) : 250=5.00%, 500=26.58% 00:35:58.449 cpu : usr=97.79%, sys=1.57%, ctx=37, majf=0, minf=20 00:35:58.449 IO depths : 1=4.3%, 2=10.5%, 4=24.7%, 8=52.3%, 16=8.2%, 32=0.0%, >=64=0.0% 00:35:58.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.449 filename0: (groupid=0, jobs=1): err= 0: pid=1077082: Thu Jul 11 21:42:31 2024 00:35:58.449 read: IOPS=140, BW=563KiB/s (576kB/s)(5688KiB/10111msec) 00:35:58.449 slat (nsec): min=8463, max=78415, avg=23102.73, stdev=16455.33 00:35:58.449 clat (msec): min=27, max=428, avg=113.45, stdev=116.11 00:35:58.449 lat (msec): min=27, max=428, avg=113.47, stdev=116.10 00:35:58.449 clat percentiles (msec): 00:35:58.449 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.449 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.449 | 70.00th=[ 241], 80.00th=[ 264], 90.00th=[ 279], 95.00th=[ 288], 00:35:58.449 | 99.00th=[ 393], 99.50th=[ 430], 99.90th=[ 430], 99.95th=[ 430], 00:35:58.449 | 99.99th=[ 430] 00:35:58.449 bw ( KiB/s): min= 128, max= 1920, per=4.16%, avg=562.40, stdev=653.61, samples=20 00:35:58.449 iops : min= 32, max= 480, avg=140.60, stdev=163.40, samples=20 00:35:58.449 lat (msec) : 50=66.39%, 100=1.13%, 250=4.36%, 500=28.13% 00:35:58.449 cpu : usr=98.27%, sys=1.34%, ctx=18, majf=0, minf=42 00:35:58.449 IO depths : 1=4.7%, 2=11.0%, 4=25.0%, 8=51.5%, 16=7.7%, 32=0.0%, >=64=0.0% 00:35:58.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 issued rwts: total=1422,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.449 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.449 filename1: (groupid=0, jobs=1): err= 0: pid=1077083: Thu Jul 11 21:42:31 2024 00:35:58.449 read: IOPS=128, BW=512KiB/s (524kB/s)(5176KiB/10108msec) 00:35:58.449 slat (nsec): min=8866, max=89490, avg=36216.12, stdev=17429.89 00:35:58.449 clat (msec): min=32, max=537, avg=124.53, stdev=157.35 00:35:58.449 lat (msec): min=32, max=537, avg=124.56, stdev=157.34 00:35:58.449 clat percentiles (msec): 00:35:58.449 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.449 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:58.449 | 70.00th=[ 37], 80.00th=[ 351], 90.00th=[ 426], 95.00th=[ 443], 00:35:58.449 | 99.00th=[ 514], 99.50th=[ 531], 99.90th=[ 542], 99.95th=[ 542], 00:35:58.449 | 99.99th=[ 542] 00:35:58.449 bw ( KiB/s): min= 128, max= 1920, per=3.79%, avg=511.35, stdev=681.71, samples=20 00:35:58.449 iops : min= 32, max= 480, avg=127.80, stdev=170.36, samples=20 00:35:58.449 lat (msec) : 50=72.95%, 100=1.24%, 250=2.32%, 500=22.26%, 750=1.24% 00:35:58.449 cpu : usr=97.67%, sys=1.58%, ctx=52, majf=0, minf=24 00:35:58.449 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:35:58.449 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.449 issued rwts: total=1294,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.449 latency : target=0, window=0, percentile=100.00%, depth=16 
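Note: the per-file reports in this 24-thread run stay internally consistent the same way as the earlier groups: with bs=4096B, bandwidth is IOPS times 4 KiB, and issued I/O count times block size over runtime reproduces the BW figure. Checking the pid=1077083 report above (values copied from the log):

  # 1294 reads of 4 KiB in 10108 ms:
  awk 'BEGIN { printf "%.0f KiB/s\n", 1294 * 4 / 10.108 }'   # prints 512, matching BW=512KiB/s

The strongly bimodal clat percentiles in these reports (a ~33-37 ms cluster through the 70th percentile, then 250-450 ms above it) are what drive their large stdev values.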
00:35:58.449 filename1: (groupid=0, jobs=1): err= 0: pid=1077084: Thu Jul 11 21:42:31 2024 00:35:58.449 read: IOPS=145, BW=580KiB/s (594kB/s)(5880KiB/10133msec) 00:35:58.449 slat (nsec): min=8449, max=99114, avg=29034.52, stdev=17189.24 00:35:58.449 clat (msec): min=8, max=474, avg=109.38, stdev=119.59 00:35:58.449 lat (msec): min=8, max=474, avg=109.40, stdev=119.58 00:35:58.449 clat percentiles (msec): 00:35:58.449 | 1.00th=[ 10], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:58.449 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.449 | 70.00th=[ 239], 80.00th=[ 262], 90.00th=[ 279], 95.00th=[ 300], 00:35:58.449 | 99.00th=[ 435], 99.50th=[ 439], 99.90th=[ 477], 99.95th=[ 477], 00:35:58.449 | 99.99th=[ 477] 00:35:58.449 bw ( KiB/s): min= 176, max= 2048, per=4.30%, avg=581.60, stdev=694.12, samples=20 00:35:58.450 iops : min= 44, max= 512, avg=145.40, stdev=173.53, samples=20 00:35:58.450 lat (msec) : 10=1.09%, 20=1.09%, 50=67.48%, 250=4.08%, 500=26.26% 00:35:58.450 cpu : usr=98.45%, sys=1.13%, ctx=15, majf=0, minf=42 00:35:58.450 IO depths : 1=4.4%, 2=9.2%, 4=20.2%, 8=57.9%, 16=8.3%, 32=0.0%, >=64=0.0% 00:35:58.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 complete : 0=0.0%, 4=92.6%, 8=1.9%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 issued rwts: total=1470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.450 filename1: (groupid=0, jobs=1): err= 0: pid=1077085: Thu Jul 11 21:42:31 2024 00:35:58.450 read: IOPS=145, BW=582KiB/s (596kB/s)(5888KiB/10117msec) 00:35:58.450 slat (nsec): min=8387, max=96038, avg=31218.02, stdev=19501.27 00:35:58.450 clat (msec): min=19, max=306, avg=109.66, stdev=109.88 00:35:58.450 lat (msec): min=19, max=306, avg=109.69, stdev=109.87 00:35:58.450 clat percentiles (msec): 00:35:58.450 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:58.450 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.450 | 70.00th=[ 245], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.450 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:35:58.450 | 99.99th=[ 309] 00:35:58.450 bw ( KiB/s): min= 128, max= 1920, per=4.31%, avg=582.40, stdev=674.09, samples=20 00:35:58.450 iops : min= 32, max= 480, avg=145.60, stdev=168.52, samples=20 00:35:58.450 lat (msec) : 20=0.61%, 50=66.78%, 250=4.35%, 500=28.26% 00:35:58.450 cpu : usr=98.46%, sys=1.13%, ctx=19, majf=0, minf=27 00:35:58.450 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:58.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.450 filename1: (groupid=0, jobs=1): err= 0: pid=1077086: Thu Jul 11 21:42:31 2024 00:35:58.450 read: IOPS=126, BW=507KiB/s (519kB/s)(5120KiB/10105msec) 00:35:58.450 slat (usec): min=8, max=114, avg=53.45, stdev=27.44 00:35:58.450 clat (msec): min=31, max=547, avg=125.77, stdev=159.36 00:35:58.450 lat (msec): min=32, max=547, avg=125.83, stdev=159.34 00:35:58.450 clat percentiles (msec): 00:35:58.450 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:58.450 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:58.450 | 70.00th=[ 37], 80.00th=[ 351], 90.00th=[ 430], 95.00th=[ 443], 
00:35:58.450 | 99.00th=[ 464], 99.50th=[ 527], 99.90th=[ 550], 99.95th=[ 550], 00:35:58.450 | 99.99th=[ 550] 00:35:58.450 bw ( KiB/s): min= 128, max= 1920, per=3.74%, avg=505.75, stdev=684.35, samples=20 00:35:58.450 iops : min= 32, max= 480, avg=126.40, stdev=171.03, samples=20 00:35:58.450 lat (msec) : 50=72.50%, 100=2.50%, 500=24.38%, 750=0.62% 00:35:58.450 cpu : usr=98.30%, sys=1.30%, ctx=14, majf=0, minf=33 00:35:58.450 IO depths : 1=5.9%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:58.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 issued rwts: total=1280,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.450 filename1: (groupid=0, jobs=1): err= 0: pid=1077087: Thu Jul 11 21:42:31 2024 00:35:58.450 read: IOPS=145, BW=582KiB/s (596kB/s)(5888KiB/10116msec) 00:35:58.450 slat (nsec): min=8469, max=87088, avg=23758.11, stdev=17628.13 00:35:58.450 clat (msec): min=19, max=306, avg=109.73, stdev=109.81 00:35:58.450 lat (msec): min=19, max=306, avg=109.75, stdev=109.80 00:35:58.450 clat percentiles (msec): 00:35:58.450 | 1.00th=[ 21], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.450 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.450 | 70.00th=[ 245], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.450 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 309], 99.95th=[ 309], 00:35:58.450 | 99.99th=[ 309] 00:35:58.450 bw ( KiB/s): min= 128, max= 1920, per=4.31%, avg=582.40, stdev=674.09, samples=20 00:35:58.450 iops : min= 32, max= 480, avg=145.60, stdev=168.52, samples=20 00:35:58.450 lat (msec) : 20=0.75%, 50=66.64%, 250=4.35%, 500=28.26% 00:35:58.450 cpu : usr=98.13%, sys=1.45%, ctx=26, majf=0, minf=36 00:35:58.450 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:35:58.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 issued rwts: total=1472,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.450 filename1: (groupid=0, jobs=1): err= 0: pid=1077088: Thu Jul 11 21:42:31 2024 00:35:58.450 read: IOPS=128, BW=513KiB/s (525kB/s)(5184KiB/10106msec) 00:35:58.450 slat (nsec): min=8687, max=79881, avg=26161.06, stdev=8595.80 00:35:58.450 clat (msec): min=32, max=531, avg=124.52, stdev=156.49 00:35:58.450 lat (msec): min=32, max=531, avg=124.54, stdev=156.49 00:35:58.450 clat percentiles (msec): 00:35:58.450 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.450 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.450 | 70.00th=[ 37], 80.00th=[ 351], 90.00th=[ 422], 95.00th=[ 443], 00:35:58.450 | 99.00th=[ 464], 99.50th=[ 514], 99.90th=[ 531], 99.95th=[ 531], 00:35:58.450 | 99.99th=[ 531] 00:35:58.450 bw ( KiB/s): min= 128, max= 1920, per=3.79%, avg=512.15, stdev=681.39, samples=20 00:35:58.450 iops : min= 32, max= 480, avg=128.00, stdev=170.28, samples=20 00:35:58.450 lat (msec) : 50=72.84%, 100=1.23%, 250=2.47%, 500=22.69%, 750=0.77% 00:35:58.450 cpu : usr=98.44%, sys=1.15%, ctx=18, majf=0, minf=25 00:35:58.450 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:35:58.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 complete : 0=0.0%, 4=94.1%, 
8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.450 filename1: (groupid=0, jobs=1): err= 0: pid=1077089: Thu Jul 11 21:42:31 2024 00:35:58.450 read: IOPS=148, BW=593KiB/s (608kB/s)(6016KiB/10140msec) 00:35:58.450 slat (usec): min=6, max=145, avg=20.21, stdev=12.96 00:35:58.450 clat (msec): min=10, max=382, avg=107.63, stdev=108.35 00:35:58.450 lat (msec): min=10, max=382, avg=107.65, stdev=108.34 00:35:58.450 clat percentiles (msec): 00:35:58.450 | 1.00th=[ 12], 5.00th=[ 33], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.450 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.450 | 70.00th=[ 239], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 279], 00:35:58.450 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 384], 99.95th=[ 384], 00:35:58.450 | 99.99th=[ 384] 00:35:58.450 bw ( KiB/s): min= 128, max= 1923, per=4.41%, avg=595.35, stdev=686.68, samples=20 00:35:58.450 iops : min= 32, max= 480, avg=148.80, stdev=171.59, samples=20 00:35:58.450 lat (msec) : 20=2.13%, 50=64.89%, 100=1.06%, 250=5.19%, 500=26.73% 00:35:58.450 cpu : usr=97.16%, sys=1.62%, ctx=158, majf=0, minf=26 00:35:58.450 IO depths : 1=4.5%, 2=10.6%, 4=24.5%, 8=52.4%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:58.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 issued rwts: total=1504,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.450 filename1: (groupid=0, jobs=1): err= 0: pid=1077090: Thu Jul 11 21:42:31 2024 00:35:58.450 read: IOPS=140, BW=563KiB/s (577kB/s)(5696KiB/10113msec) 00:35:58.450 slat (usec): min=8, max=112, avg=44.87, stdev=25.98 00:35:58.450 clat (msec): min=31, max=434, avg=113.19, stdev=115.77 00:35:58.450 lat (msec): min=31, max=434, avg=113.23, stdev=115.75 00:35:58.450 clat percentiles (msec): 00:35:58.450 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:58.450 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.450 | 70.00th=[ 241], 80.00th=[ 266], 90.00th=[ 279], 95.00th=[ 284], 00:35:58.450 | 99.00th=[ 435], 99.50th=[ 435], 99.90th=[ 435], 99.95th=[ 435], 00:35:58.450 | 99.99th=[ 435] 00:35:58.450 bw ( KiB/s): min= 128, max= 1920, per=4.17%, avg=563.20, stdev=652.75, samples=20 00:35:58.450 iops : min= 32, max= 480, avg=140.80, stdev=163.19, samples=20 00:35:58.450 lat (msec) : 50=66.29%, 100=1.12%, 250=3.51%, 500=29.07% 00:35:58.450 cpu : usr=98.35%, sys=1.19%, ctx=23, majf=0, minf=39 00:35:58.450 IO depths : 1=4.9%, 2=11.1%, 4=24.8%, 8=51.6%, 16=7.6%, 32=0.0%, >=64=0.0% 00:35:58.450 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.450 issued rwts: total=1424,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.450 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.450 filename2: (groupid=0, jobs=1): err= 0: pid=1077091: Thu Jul 11 21:42:31 2024 00:35:58.450 read: IOPS=149, BW=599KiB/s (614kB/s)(6080KiB/10142msec) 00:35:58.450 slat (nsec): min=7644, max=86955, avg=11684.72, stdev=4781.11 00:35:58.450 clat (msec): min=8, max=338, avg=106.57, stdev=108.19 00:35:58.450 lat (msec): min=8, max=338, avg=106.58, stdev=108.19 00:35:58.450 clat percentiles (msec): 00:35:58.450 | 1.00th=[ 9], 5.00th=[ 27], 10.00th=[ 
34], 20.00th=[ 34], 00:35:58.450 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.450 | 70.00th=[ 239], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 279], 00:35:58.450 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 338], 99.95th=[ 338], 00:35:58.450 | 99.99th=[ 338] 00:35:58.450 bw ( KiB/s): min= 144, max= 2052, per=4.45%, avg=601.80, stdev=699.68, samples=20 00:35:58.450 iops : min= 36, max= 513, avg=150.45, stdev=174.92, samples=20 00:35:58.451 lat (msec) : 10=1.05%, 20=1.05%, 50=65.26%, 100=1.05%, 250=4.87% 00:35:58.451 lat (msec) : 500=26.71% 00:35:58.451 cpu : usr=97.97%, sys=1.51%, ctx=23, majf=0, minf=27 00:35:58.451 IO depths : 1=4.2%, 2=10.3%, 4=24.5%, 8=52.7%, 16=8.3%, 32=0.0%, >=64=0.0% 00:35:58.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 issued rwts: total=1520,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.451 filename2: (groupid=0, jobs=1): err= 0: pid=1077092: Thu Jul 11 21:42:31 2024 00:35:58.451 read: IOPS=142, BW=570KiB/s (584kB/s)(5768KiB/10113msec) 00:35:58.451 slat (nsec): min=6596, max=58813, avg=19583.26, stdev=9048.34 00:35:58.451 clat (msec): min=32, max=423, avg=111.93, stdev=111.42 00:35:58.451 lat (msec): min=32, max=423, avg=111.95, stdev=111.41 00:35:58.451 clat percentiles (msec): 00:35:58.451 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.451 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 36], 00:35:58.451 | 70.00th=[ 241], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.451 | 99.00th=[ 347], 99.50th=[ 405], 99.90th=[ 422], 99.95th=[ 422], 00:35:58.451 | 99.99th=[ 422] 00:35:58.451 bw ( KiB/s): min= 176, max= 1920, per=4.22%, avg=570.40, stdev=648.37, samples=20 00:35:58.451 iops : min= 44, max= 480, avg=142.60, stdev=162.09, samples=20 00:35:58.451 lat (msec) : 50=65.46%, 100=1.11%, 250=6.38%, 500=27.05% 00:35:58.451 cpu : usr=97.70%, sys=1.69%, ctx=28, majf=0, minf=21 00:35:58.451 IO depths : 1=4.2%, 2=8.7%, 4=19.7%, 8=59.0%, 16=8.3%, 32=0.0%, >=64=0.0% 00:35:58.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 complete : 0=0.0%, 4=92.5%, 8=1.8%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 issued rwts: total=1442,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.451 filename2: (groupid=0, jobs=1): err= 0: pid=1077093: Thu Jul 11 21:42:31 2024 00:35:58.451 read: IOPS=145, BW=581KiB/s (595kB/s)(5880KiB/10117msec) 00:35:58.451 slat (nsec): min=8513, max=87832, avg=30853.40, stdev=17264.65 00:35:58.451 clat (msec): min=19, max=463, avg=109.69, stdev=110.32 00:35:58.451 lat (msec): min=19, max=463, avg=109.72, stdev=110.31 00:35:58.451 clat percentiles (msec): 00:35:58.451 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:58.451 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.451 | 70.00th=[ 245], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.451 | 99.00th=[ 309], 99.50th=[ 309], 99.90th=[ 464], 99.95th=[ 464], 00:35:58.451 | 99.99th=[ 464] 00:35:58.451 bw ( KiB/s): min= 144, max= 1920, per=4.30%, avg=581.60, stdev=673.91, samples=20 00:35:58.451 iops : min= 36, max= 480, avg=145.40, stdev=168.48, samples=20 00:35:58.451 lat (msec) : 20=0.61%, 50=66.87%, 250=4.22%, 500=28.30% 00:35:58.451 cpu : usr=97.91%, sys=1.46%, ctx=23, majf=0, minf=28 
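Every per-file report in this group follows the same fio layout: clat percentiles, per-sample bandwidth/IOPS, latency buckets, CPU usage, and queue-depth histograms. Rather than scraping that text, the same numbers can be pulled from fio's structured output; a minimal sketch, assuming fio 3.x and jq are installed, with result.json and jobfile.fio as placeholder names:

# run the job with machine-readable output instead of the human-readable dump
fio --output-format=json --output=result.json jobfile.fio
# per-job read IOPS and bandwidth (fio reports bw in KiB/s)
jq -r '.jobs[] | [.jobname, .read.iops, .read.bw] | @tsv' result.json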
00:35:58.451 IO depths : 1=4.3%, 2=10.5%, 4=25.0%, 8=52.0%, 16=8.2%, 32=0.0%, >=64=0.0% 00:35:58.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 issued rwts: total=1470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.451 filename2: (groupid=0, jobs=1): err= 0: pid=1077094: Thu Jul 11 21:42:31 2024 00:35:58.451 read: IOPS=145, BW=581KiB/s (595kB/s)(5880KiB/10116msec) 00:35:58.451 slat (usec): min=8, max=103, avg=29.16, stdev=17.00 00:35:58.451 clat (msec): min=18, max=416, avg=109.72, stdev=110.20 00:35:58.451 lat (msec): min=18, max=416, avg=109.75, stdev=110.19 00:35:58.451 clat percentiles (msec): 00:35:58.451 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:58.451 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.451 | 70.00th=[ 245], 80.00th=[ 266], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.451 | 99.00th=[ 305], 99.50th=[ 305], 99.90th=[ 418], 99.95th=[ 418], 00:35:58.451 | 99.99th=[ 418] 00:35:58.451 bw ( KiB/s): min= 144, max= 1920, per=4.30%, avg=581.60, stdev=674.23, samples=20 00:35:58.451 iops : min= 36, max= 480, avg=145.40, stdev=168.56, samples=20 00:35:58.451 lat (msec) : 20=0.14%, 50=67.35%, 250=4.08%, 500=28.44% 00:35:58.451 cpu : usr=98.09%, sys=1.35%, ctx=37, majf=0, minf=31 00:35:58.451 IO depths : 1=4.4%, 2=10.6%, 4=25.0%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:58.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 issued rwts: total=1470,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.451 filename2: (groupid=0, jobs=1): err= 0: pid=1077095: Thu Jul 11 21:42:31 2024 00:35:58.451 read: IOPS=148, BW=593KiB/s (607kB/s)(6008KiB/10140msec) 00:35:58.451 slat (usec): min=6, max=115, avg=45.35, stdev=27.10 00:35:58.451 clat (msec): min=10, max=351, avg=107.51, stdev=108.72 00:35:58.451 lat (msec): min=10, max=351, avg=107.56, stdev=108.70 00:35:58.451 clat percentiles (msec): 00:35:58.451 | 1.00th=[ 11], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:58.451 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.451 | 70.00th=[ 239], 80.00th=[ 264], 90.00th=[ 271], 95.00th=[ 279], 00:35:58.451 | 99.00th=[ 288], 99.50th=[ 288], 99.90th=[ 351], 99.95th=[ 351], 00:35:58.451 | 99.99th=[ 351] 00:35:58.451 bw ( KiB/s): min= 144, max= 2048, per=4.40%, avg=594.40, stdev=687.92, samples=20 00:35:58.451 iops : min= 36, max= 512, avg=148.60, stdev=171.98, samples=20 00:35:58.451 lat (msec) : 20=2.13%, 50=64.98%, 100=1.07%, 250=4.93%, 500=26.90% 00:35:58.451 cpu : usr=98.18%, sys=1.33%, ctx=37, majf=0, minf=30 00:35:58.451 IO depths : 1=4.4%, 2=10.7%, 4=25.0%, 8=51.9%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:58.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 issued rwts: total=1502,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.451 filename2: (groupid=0, jobs=1): err= 0: pid=1077096: Thu Jul 11 21:42:31 2024 00:35:58.451 read: IOPS=139, BW=557KiB/s (570kB/s)(5632KiB/10111msec) 00:35:58.451 slat (nsec): min=8361, max=55689, avg=20091.64, stdev=9418.15 00:35:58.451 
clat (msec): min=32, max=540, avg=114.34, stdev=120.69 00:35:58.451 lat (msec): min=32, max=540, avg=114.36, stdev=120.68 00:35:58.451 clat percentiles (msec): 00:35:58.451 | 1.00th=[ 33], 5.00th=[ 34], 10.00th=[ 34], 20.00th=[ 34], 00:35:58.451 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.451 | 70.00th=[ 241], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 321], 00:35:58.451 | 99.00th=[ 456], 99.50th=[ 456], 99.90th=[ 542], 99.95th=[ 542], 00:35:58.451 | 99.99th=[ 542] 00:35:58.451 bw ( KiB/s): min= 128, max= 1920, per=4.12%, avg=556.80, stdev=656.39, samples=20 00:35:58.451 iops : min= 32, max= 480, avg=139.20, stdev=164.10, samples=20 00:35:58.451 lat (msec) : 50=67.05%, 100=1.14%, 250=3.69%, 500=27.98%, 750=0.14% 00:35:58.451 cpu : usr=98.18%, sys=1.41%, ctx=15, majf=0, minf=29 00:35:58.451 IO depths : 1=4.5%, 2=9.8%, 4=22.2%, 8=55.5%, 16=8.0%, 32=0.0%, >=64=0.0% 00:35:58.451 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 complete : 0=0.0%, 4=93.3%, 8=0.9%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.451 issued rwts: total=1408,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.451 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.451 filename2: (groupid=0, jobs=1): err= 0: pid=1077097: Thu Jul 11 21:42:31 2024 00:35:58.451 read: IOPS=143, BW=576KiB/s (589kB/s)(5824KiB/10117msec) 00:35:58.451 slat (nsec): min=7563, max=93673, avg=34496.48, stdev=19476.66 00:35:58.451 clat (msec): min=19, max=512, avg=110.51, stdev=114.36 00:35:58.451 lat (msec): min=19, max=512, avg=110.54, stdev=114.34 00:35:58.451 clat percentiles (msec): 00:35:58.451 | 1.00th=[ 21], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 34], 00:35:58.451 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 35], 00:35:58.451 | 70.00th=[ 255], 80.00th=[ 264], 90.00th=[ 275], 95.00th=[ 279], 00:35:58.451 | 99.00th=[ 430], 99.50th=[ 430], 99.90th=[ 514], 99.95th=[ 514], 00:35:58.451 | 99.99th=[ 514] 00:35:58.451 bw ( KiB/s): min= 128, max= 1920, per=4.26%, avg=576.00, stdev=677.35, samples=20 00:35:58.451 iops : min= 32, max= 480, avg=144.00, stdev=169.34, samples=20 00:35:58.451 lat (msec) : 20=0.82%, 50=67.31%, 250=1.51%, 500=30.22%, 750=0.14% 00:35:58.451 cpu : usr=98.33%, sys=1.24%, ctx=14, majf=0, minf=31 00:35:58.451 IO depths : 1=4.4%, 2=8.9%, 4=19.6%, 8=59.0%, 16=8.1%, 32=0.0%, >=64=0.0% 00:35:58.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.452 complete : 0=0.0%, 4=92.5%, 8=1.7%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.452 issued rwts: total=1456,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.452 filename2: (groupid=0, jobs=1): err= 0: pid=1077098: Thu Jul 11 21:42:31 2024 00:35:58.452 read: IOPS=128, BW=513KiB/s (525kB/s)(5184KiB/10106msec) 00:35:58.452 slat (usec): min=8, max=112, avg=56.35, stdev=20.51 00:35:58.452 clat (msec): min=31, max=537, avg=124.26, stdev=157.17 00:35:58.452 lat (msec): min=32, max=537, avg=124.31, stdev=157.15 00:35:58.452 clat percentiles (msec): 00:35:58.452 | 1.00th=[ 33], 5.00th=[ 33], 10.00th=[ 33], 20.00th=[ 33], 00:35:58.452 | 30.00th=[ 34], 40.00th=[ 34], 50.00th=[ 34], 60.00th=[ 34], 00:35:58.452 | 70.00th=[ 37], 80.00th=[ 342], 90.00th=[ 426], 95.00th=[ 443], 00:35:58.452 | 99.00th=[ 514], 99.50th=[ 531], 99.90th=[ 542], 99.95th=[ 542], 00:35:58.452 | 99.99th=[ 542] 00:35:58.452 bw ( KiB/s): min= 128, max= 1920, per=3.79%, avg=512.00, stdev=680.98, samples=20 00:35:58.452 iops : min= 32, max= 480, 
avg=128.00, stdev=170.25, samples=20 00:35:58.452 lat (msec) : 50=72.84%, 100=1.23%, 250=2.31%, 500=22.22%, 750=1.39% 00:35:58.452 cpu : usr=98.38%, sys=1.20%, ctx=18, majf=0, minf=26 00:35:58.452 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:35:58.452 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.452 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:35:58.452 issued rwts: total=1296,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:35:58.452 latency : target=0, window=0, percentile=100.00%, depth=16 00:35:58.452 00:35:58.452 Run status group 0 (all jobs): 00:35:58.452 READ: bw=13.2MiB/s (13.8MB/s), 507KiB/s-600KiB/s (519kB/s-614kB/s), io=134MiB (140MB), run=10101-10142msec 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 
-- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 bdev_null0 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 [2024-07-11 
21:42:31.854677] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 bdev_null1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:35:58.452 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:58.453 { 00:35:58.453 "params": { 00:35:58.453 "name": "Nvme$subsystem", 00:35:58.453 "trtype": "$TEST_TRANSPORT", 00:35:58.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.453 "adrfam": "ipv4", 00:35:58.453 "trsvcid": "$NVMF_PORT", 00:35:58.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.453 "hdgst": ${hdgst:-false}, 00:35:58.453 "ddgst": ${ddgst:-false} 00:35:58.453 }, 00:35:58.453 "method": "bdev_nvme_attach_controller" 
00:35:58.453 } 00:35:58.453 EOF 00:35:58.453 )") 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:35:58.453 { 00:35:58.453 "params": { 00:35:58.453 "name": "Nvme$subsystem", 00:35:58.453 "trtype": "$TEST_TRANSPORT", 00:35:58.453 "traddr": "$NVMF_FIRST_TARGET_IP", 00:35:58.453 "adrfam": "ipv4", 00:35:58.453 "trsvcid": "$NVMF_PORT", 00:35:58.453 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:35:58.453 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:35:58.453 "hdgst": ${hdgst:-false}, 00:35:58.453 "ddgst": ${ddgst:-false} 00:35:58.453 }, 00:35:58.453 "method": "bdev_nvme_attach_controller" 00:35:58.453 } 00:35:58.453 EOF 00:35:58.453 )") 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
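At this point the script has assembled two file descriptors for fio: /dev/fd/62 carries the SPDK JSON subsystem config built from the heredocs above (the jq call merges the per-subsystem fragments), and /dev/fd/61 carries the generated job file, with the spdk_bdev ioengine loaded via LD_PRELOAD. A standalone equivalent, as a sketch only: the plugin path and the names bdev.json/jobfile.fio are placeholders, and --thread is assumed because the SPDK fio plugin requires threaded jobs.

# attach one NVMe/TCP controller as a bdev, then drive it through fio
cat > bdev.json <<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [{
      "method": "bdev_nvme_attach_controller",
      "params": {
        "name": "Nvme0", "trtype": "tcp", "traddr": "10.0.0.2",
        "adrfam": "ipv4", "trsvcid": "4420",
        "subnqn": "nqn.2016-06.io.spdk:cnode0",
        "hdgst": false, "ddgst": false
      }
    }]
  }]
}
JSON
LD_PRELOAD=./spdk/build/fio/spdk_bdev \
  fio --ioengine=spdk_bdev --spdk_json_conf=bdev.json --thread jobfile.fio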
00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:35:58.453 "params": { 00:35:58.453 "name": "Nvme0", 00:35:58.453 "trtype": "tcp", 00:35:58.453 "traddr": "10.0.0.2", 00:35:58.453 "adrfam": "ipv4", 00:35:58.453 "trsvcid": "4420", 00:35:58.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:58.453 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:58.453 "hdgst": false, 00:35:58.453 "ddgst": false 00:35:58.453 }, 00:35:58.453 "method": "bdev_nvme_attach_controller" 00:35:58.453 },{ 00:35:58.453 "params": { 00:35:58.453 "name": "Nvme1", 00:35:58.453 "trtype": "tcp", 00:35:58.453 "traddr": "10.0.0.2", 00:35:58.453 "adrfam": "ipv4", 00:35:58.453 "trsvcid": "4420", 00:35:58.453 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:35:58.453 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:35:58.453 "hdgst": false, 00:35:58.453 "ddgst": false 00:35:58.453 }, 00:35:58.453 "method": "bdev_nvme_attach_controller" 00:35:58.453 }' 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:35:58.453 21:42:31 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:35:58.453 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:58.453 ... 00:35:58.453 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:35:58.453 ... 
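The fio banner just below ("Starting 4 threads") matches the parameters set for this pass: two filename sections times numjobs=2, iodepth=8, mixed block sizes, runtime=5. A plausible reconstruction of the job file that gen_fio_conf writes to /dev/fd/61 — the bdev names Nvme0n1/Nvme1n1 are assumptions based on the controllers attached above:

[global]
thread=1
ioengine=spdk_bdev
rw=randread
bs=8k,16k,128k   ; read,write,trim sizes, matching (R) 8k (W) 16k (T) 128k in the banner
numjobs=2
iodepth=8
runtime=5

[filename0]
filename=Nvme0n1

[filename1]
filename=Nvme1n1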
00:35:58.453 fio-3.35 00:35:58.453 Starting 4 threads 00:35:58.453 EAL: No free 2048 kB hugepages reported on node 1 00:36:03.712 00:36:03.712 filename0: (groupid=0, jobs=1): err= 0: pid=1078470: Thu Jul 11 21:42:37 2024 00:36:03.712 read: IOPS=1827, BW=14.3MiB/s (15.0MB/s)(71.4MiB/5003msec) 00:36:03.712 slat (nsec): min=4975, max=74114, avg=21270.94, stdev=9308.64 00:36:03.712 clat (usec): min=888, max=8162, avg=4307.81, stdev=442.69 00:36:03.712 lat (usec): min=911, max=8176, avg=4329.08, stdev=442.98 00:36:03.712 clat percentiles (usec): 00:36:03.712 | 1.00th=[ 2933], 5.00th=[ 3654], 10.00th=[ 3982], 20.00th=[ 4113], 00:36:03.712 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:03.712 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4686], 00:36:03.712 | 99.00th=[ 5800], 99.50th=[ 6652], 99.90th=[ 7439], 99.95th=[ 7832], 00:36:03.712 | 99.99th=[ 8160] 00:36:03.712 bw ( KiB/s): min=14208, max=15344, per=25.15%, avg=14616.00, stdev=399.56, samples=10 00:36:03.712 iops : min= 1776, max= 1918, avg=1827.00, stdev=49.94, samples=10 00:36:03.712 lat (usec) : 1000=0.01% 00:36:03.712 lat (msec) : 2=0.20%, 4=10.90%, 10=88.89% 00:36:03.712 cpu : usr=94.62%, sys=4.78%, ctx=20, majf=0, minf=113 00:36:03.712 IO depths : 1=0.2%, 2=12.8%, 4=60.1%, 8=27.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.712 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.712 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.712 issued rwts: total=9143,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.712 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:03.713 filename0: (groupid=0, jobs=1): err= 0: pid=1078471: Thu Jul 11 21:42:37 2024 00:36:03.713 read: IOPS=1821, BW=14.2MiB/s (14.9MB/s)(71.2MiB/5001msec) 00:36:03.713 slat (nsec): min=5266, max=70789, avg=21044.49, stdev=10975.61 00:36:03.713 clat (usec): min=837, max=8004, avg=4311.56, stdev=510.57 00:36:03.713 lat (usec): min=857, max=8012, avg=4332.60, stdev=511.27 00:36:03.713 clat percentiles (usec): 00:36:03.713 | 1.00th=[ 2638], 5.00th=[ 3654], 10.00th=[ 3982], 20.00th=[ 4113], 00:36:03.713 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:03.713 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 00:36:03.713 | 99.00th=[ 6456], 99.50th=[ 7046], 99.90th=[ 7439], 99.95th=[ 7570], 00:36:03.713 | 99.99th=[ 8029] 00:36:03.713 bw ( KiB/s): min=14208, max=14976, per=24.92%, avg=14483.56, stdev=305.03, samples=9 00:36:03.713 iops : min= 1776, max= 1872, avg=1810.44, stdev=38.13, samples=9 00:36:03.713 lat (usec) : 1000=0.08% 00:36:03.713 lat (msec) : 2=0.52%, 4=10.84%, 10=88.56% 00:36:03.713 cpu : usr=92.82%, sys=6.46%, ctx=10, majf=0, minf=91 00:36:03.713 IO depths : 1=0.1%, 2=19.2%, 4=54.3%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.713 complete : 0=0.0%, 4=91.3%, 8=8.7%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.713 issued rwts: total=9111,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:03.713 filename1: (groupid=0, jobs=1): err= 0: pid=1078472: Thu Jul 11 21:42:37 2024 00:36:03.713 read: IOPS=1815, BW=14.2MiB/s (14.9MB/s)(71.0MiB/5002msec) 00:36:03.713 slat (nsec): min=4911, max=73120, avg=21298.44, stdev=11070.47 00:36:03.713 clat (usec): min=755, max=7904, avg=4325.87, stdev=499.60 00:36:03.713 lat (usec): min=769, max=7931, avg=4347.17, stdev=500.00 
00:36:03.713 clat percentiles (usec): 00:36:03.713 | 1.00th=[ 2802], 5.00th=[ 3785], 10.00th=[ 4015], 20.00th=[ 4113], 00:36:03.713 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4293], 60.00th=[ 4359], 00:36:03.713 | 70.00th=[ 4424], 80.00th=[ 4490], 90.00th=[ 4621], 95.00th=[ 4752], 00:36:03.713 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 7439], 99.95th=[ 7701], 00:36:03.713 | 99.99th=[ 7898] 00:36:03.713 bw ( KiB/s): min=14080, max=15008, per=24.99%, avg=14522.80, stdev=366.78, samples=10 00:36:03.713 iops : min= 1760, max= 1876, avg=1815.30, stdev=45.88, samples=10 00:36:03.713 lat (usec) : 1000=0.08% 00:36:03.713 lat (msec) : 2=0.43%, 4=9.25%, 10=90.25% 00:36:03.713 cpu : usr=92.88%, sys=6.40%, ctx=16, majf=0, minf=89 00:36:03.713 IO depths : 1=0.1%, 2=18.0%, 4=56.0%, 8=26.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.713 complete : 0=0.0%, 4=90.9%, 8=9.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.713 issued rwts: total=9083,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:03.713 filename1: (groupid=0, jobs=1): err= 0: pid=1078473: Thu Jul 11 21:42:37 2024 00:36:03.713 read: IOPS=1800, BW=14.1MiB/s (14.8MB/s)(70.4MiB/5001msec) 00:36:03.713 slat (nsec): min=5309, max=69861, avg=20634.86, stdev=10636.51 00:36:03.713 clat (usec): min=836, max=8095, avg=4367.61, stdev=569.84 00:36:03.713 lat (usec): min=850, max=8108, avg=4388.24, stdev=570.04 00:36:03.713 clat percentiles (usec): 00:36:03.713 | 1.00th=[ 2147], 5.00th=[ 3916], 10.00th=[ 4047], 20.00th=[ 4146], 00:36:03.713 | 30.00th=[ 4228], 40.00th=[ 4293], 50.00th=[ 4359], 60.00th=[ 4424], 00:36:03.713 | 70.00th=[ 4490], 80.00th=[ 4555], 90.00th=[ 4686], 95.00th=[ 4948], 00:36:03.713 | 99.00th=[ 6915], 99.50th=[ 7308], 99.90th=[ 7832], 99.95th=[ 8029], 00:36:03.713 | 99.99th=[ 8094] 00:36:03.713 bw ( KiB/s): min=13888, max=14976, per=24.77%, avg=14392.44, stdev=367.48, samples=9 00:36:03.713 iops : min= 1736, max= 1872, avg=1799.00, stdev=45.99, samples=9 00:36:03.713 lat (usec) : 1000=0.10% 00:36:03.713 lat (msec) : 2=0.82%, 4=6.84%, 10=92.24% 00:36:03.713 cpu : usr=93.14%, sys=6.10%, ctx=21, majf=0, minf=86 00:36:03.713 IO depths : 1=0.1%, 2=15.9%, 4=57.3%, 8=26.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:03.713 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.713 complete : 0=0.0%, 4=91.5%, 8=8.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:03.713 issued rwts: total=9005,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:03.713 latency : target=0, window=0, percentile=100.00%, depth=8 00:36:03.713 00:36:03.713 Run status group 0 (all jobs): 00:36:03.713 READ: bw=56.8MiB/s (59.5MB/s), 14.1MiB/s-14.3MiB/s (14.8MB/s-15.0MB/s), io=284MiB (298MB), run=5001-5003msec 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.713 21:42:38 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.713 00:36:03.713 real 0m24.300s 00:36:03.713 user 4m35.587s 00:36:03.713 sys 0m6.288s 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:03.713 21:42:38 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:36:03.713 ************************************ 00:36:03.713 END TEST fio_dif_rand_params 00:36:03.713 ************************************ 00:36:03.713 21:42:38 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:03.713 21:42:38 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:36:03.713 21:42:38 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:03.713 21:42:38 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:03.713 21:42:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:03.713 ************************************ 00:36:03.713 START TEST fio_dif_digest 00:36:03.713 ************************************ 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:36:03.713 21:42:38 
nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:36:03.713 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:03.714 bdev_null0 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:03.714 [2024-07-11 21:42:38.279067] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:36:03.714 { 00:36:03.714 "params": { 00:36:03.714 "name": "Nvme$subsystem", 00:36:03.714 "trtype": "$TEST_TRANSPORT", 00:36:03.714 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:03.714 "adrfam": "ipv4", 00:36:03.714 "trsvcid": "$NVMF_PORT", 00:36:03.714 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:03.714 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:03.714 "hdgst": ${hdgst:-false}, 
00:36:03.714 "ddgst": ${ddgst:-false} 00:36:03.714 }, 00:36:03.714 "method": "bdev_nvme_attach_controller" 00:36:03.714 } 00:36:03.714 EOF 00:36:03.714 )") 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:36:03.714 "params": { 00:36:03.714 "name": "Nvme0", 00:36:03.714 "trtype": "tcp", 00:36:03.714 "traddr": "10.0.0.2", 00:36:03.714 "adrfam": "ipv4", 00:36:03.714 "trsvcid": "4420", 00:36:03.714 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:03.714 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:03.714 "hdgst": true, 00:36:03.714 "ddgst": true 00:36:03.714 }, 00:36:03.714 "method": "bdev_nvme_attach_controller" 00:36:03.714 }' 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:36:03.714 21:42:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:36:03.972 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:36:03.972 ... 
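For reference, the target-side setup traced above maps onto plain rpc.py calls (rpc_cmd is a thin wrapper around spdk/scripts/rpc.py; the script path is the only assumption here). Note the null bdev now carries --dif-type 3, versus --dif-type 1 in the previous test:

./scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420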
00:36:03.972 fio-3.35 00:36:03.972 Starting 3 threads 00:36:03.972 EAL: No free 2048 kB hugepages reported on node 1 00:36:16.195 00:36:16.195 filename0: (groupid=0, jobs=1): err= 0: pid=1079231: Thu Jul 11 21:42:49 2024 00:36:16.195 read: IOPS=198, BW=24.8MiB/s (26.0MB/s)(249MiB/10049msec) 00:36:16.195 slat (nsec): min=4730, max=56222, avg=16973.75, stdev=5699.16 00:36:16.195 clat (usec): min=11454, max=57162, avg=15077.53, stdev=3520.77 00:36:16.195 lat (usec): min=11467, max=57176, avg=15094.50, stdev=3520.55 00:36:16.195 clat percentiles (usec): 00:36:16.195 | 1.00th=[12256], 5.00th=[13042], 10.00th=[13304], 20.00th=[13829], 00:36:16.195 | 30.00th=[14222], 40.00th=[14484], 50.00th=[14746], 60.00th=[15139], 00:36:16.195 | 70.00th=[15401], 80.00th=[15795], 90.00th=[16188], 95.00th=[16909], 00:36:16.195 | 99.00th=[18220], 99.50th=[54264], 99.90th=[56361], 99.95th=[57410], 00:36:16.195 | 99.99th=[57410] 00:36:16.196 bw ( KiB/s): min=22784, max=27648, per=34.03%, avg=25484.80, stdev=1287.99, samples=20 00:36:16.196 iops : min= 178, max= 216, avg=199.10, stdev=10.06, samples=20 00:36:16.196 lat (msec) : 20=99.15%, 50=0.15%, 100=0.70% 00:36:16.196 cpu : usr=93.12%, sys=6.43%, ctx=27, majf=0, minf=124 00:36:16.196 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.196 issued rwts: total=1994,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.196 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:16.196 filename0: (groupid=0, jobs=1): err= 0: pid=1079232: Thu Jul 11 21:42:49 2024 00:36:16.196 read: IOPS=193, BW=24.2MiB/s (25.4MB/s)(243MiB/10048msec) 00:36:16.196 slat (usec): min=7, max=111, avg=16.21, stdev= 5.68 00:36:16.196 clat (usec): min=9116, max=54145, avg=15455.06, stdev=1798.84 00:36:16.196 lat (usec): min=9135, max=54158, avg=15471.27, stdev=1798.61 00:36:16.196 clat percentiles (usec): 00:36:16.196 | 1.00th=[10945], 5.00th=[13304], 10.00th=[13960], 20.00th=[14484], 00:36:16.196 | 30.00th=[14877], 40.00th=[15139], 50.00th=[15401], 60.00th=[15664], 00:36:16.196 | 70.00th=[16057], 80.00th=[16450], 90.00th=[17171], 95.00th=[17695], 00:36:16.196 | 99.00th=[18482], 99.50th=[19006], 99.90th=[49546], 99.95th=[54264], 00:36:16.196 | 99.99th=[54264] 00:36:16.196 bw ( KiB/s): min=23342, max=26880, per=33.20%, avg=24859.90, stdev=960.81, samples=20 00:36:16.196 iops : min= 182, max= 210, avg=194.20, stdev= 7.54, samples=20 00:36:16.196 lat (msec) : 10=0.15%, 20=99.59%, 50=0.21%, 100=0.05% 00:36:16.196 cpu : usr=92.67%, sys=6.86%, ctx=29, majf=0, minf=217 00:36:16.196 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.196 issued rwts: total=1945,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.196 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:16.196 filename0: (groupid=0, jobs=1): err= 0: pid=1079233: Thu Jul 11 21:42:49 2024 00:36:16.196 read: IOPS=193, BW=24.1MiB/s (25.3MB/s)(243MiB/10048msec) 00:36:16.196 slat (nsec): min=6503, max=49671, avg=18493.67, stdev=6134.71 00:36:16.196 clat (usec): min=8876, max=56512, avg=15492.94, stdev=1847.77 00:36:16.196 lat (usec): min=8897, max=56524, avg=15511.44, stdev=1847.37 00:36:16.196 clat percentiles (usec): 00:36:16.196 | 
1.00th=[10159], 5.00th=[13435], 10.00th=[13960], 20.00th=[14484], 00:36:16.196 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:36:16.196 | 70.00th=[16188], 80.00th=[16581], 90.00th=[16909], 95.00th=[17433], 00:36:16.196 | 99.00th=[18220], 99.50th=[19006], 99.90th=[49021], 99.95th=[56361], 00:36:16.196 | 99.99th=[56361] 00:36:16.196 bw ( KiB/s): min=23296, max=27392, per=33.11%, avg=24793.60, stdev=1018.17, samples=20 00:36:16.196 iops : min= 182, max= 214, avg=193.70, stdev= 7.95, samples=20 00:36:16.196 lat (msec) : 10=0.77%, 20=98.97%, 50=0.21%, 100=0.05% 00:36:16.196 cpu : usr=93.32%, sys=6.20%, ctx=26, majf=0, minf=152 00:36:16.196 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:36:16.196 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.196 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:36:16.196 issued rwts: total=1940,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:36:16.196 latency : target=0, window=0, percentile=100.00%, depth=3 00:36:16.196 00:36:16.196 Run status group 0 (all jobs): 00:36:16.196 READ: bw=73.1MiB/s (76.7MB/s), 24.1MiB/s-24.8MiB/s (25.3MB/s-26.0MB/s), io=735MiB (771MB), run=10048-10049msec 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:16.196 00:36:16.196 real 0m11.151s 00:36:16.196 user 0m29.030s 00:36:16.196 sys 0m2.233s 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:16.196 21:42:49 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:36:16.196 ************************************ 00:36:16.196 END TEST fio_dif_digest 00:36:16.196 ************************************ 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:36:16.196 21:42:49 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:36:16.196 21:42:49 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@122 -- # 
modprobe -v -r nvme-tcp 00:36:16.196 rmmod nvme_tcp 00:36:16.196 rmmod nvme_fabrics 00:36:16.196 rmmod nvme_keyring 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 1072689 ']' 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 1072689 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 1072689 ']' 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 1072689 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1072689 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1072689' 00:36:16.196 killing process with pid 1072689 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@967 -- # kill 1072689 00:36:16.196 21:42:49 nvmf_dif -- common/autotest_common.sh@972 -- # wait 1072689 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:16.196 21:42:49 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:16.196 Waiting for block devices as requested 00:36:16.196 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:16.196 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:16.455 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:16.455 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:16.455 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:16.713 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:16.713 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:16.713 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:16.713 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:16.713 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:16.971 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:16.971 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:16.971 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:17.272 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:17.272 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:17.272 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:17.272 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:17.532 21:42:52 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:17.532 21:42:52 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:17.532 21:42:52 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:17.532 21:42:52 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:17.532 21:42:52 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:17.532 21:42:52 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:17.532 21:42:52 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.436 21:42:54 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:19.436 00:36:19.436 real 1m6.453s 00:36:19.436 user 6m31.568s 00:36:19.436 sys 0m17.808s 00:36:19.436 21:42:54 nvmf_dif -- common/autotest_common.sh@1124 
-- # xtrace_disable 00:36:19.436 21:42:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:36:19.436 ************************************ 00:36:19.436 END TEST nvmf_dif 00:36:19.436 ************************************ 00:36:19.436 21:42:54 -- common/autotest_common.sh@1142 -- # return 0 00:36:19.436 21:42:54 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:19.436 21:42:54 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:19.436 21:42:54 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:19.436 21:42:54 -- common/autotest_common.sh@10 -- # set +x 00:36:19.436 ************************************ 00:36:19.436 START TEST nvmf_abort_qd_sizes 00:36:19.436 ************************************ 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:36:19.436 * Looking for test storage... 00:36:19.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.436 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:19.437 21:42:54 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:36:19.437 21:42:54 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.336 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:36:21.337 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:36:21.337 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:36:21.337 Found net devices under 0000:0a:00.0: cvl_0_0 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:36:21.337 Found net devices under 0000:0a:00.1: cvl_0_1 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
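The two "Found net devices" lines above come from gather_supported_nvmf_pci_devs, which matches the host's E810 NICs (PCI ID 0x8086:0x159b) against a table of supported devices and then maps each PCI function to its interface name through sysfs. A condensed sketch of that loop, assembled from the commands traced above (the link-state check is elided):

    for pci in "${pci_devs[@]}"; do
        # each PCI function exposes its netdev directory under sysfs
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
        pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep e.g. cvl_0_0
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done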
00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.337 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:36:21.595 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.595 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:36:21.595 00:36:21.595 --- 10.0.0.2 ping statistics --- 00:36:21.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.595 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:21.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:36:21.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.143 ms 00:36:21.595 00:36:21.595 --- 10.0.0.1 ping statistics --- 00:36:21.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.595 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:36:21.595 21:42:56 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:22.971 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:22.971 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:22.971 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:22.971 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:22.971 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:22.971 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:22.971 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:22.971 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:22.971 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:22.971 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:22.971 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:22.971 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:22.971 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:22.971 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:22.971 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:22.972 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:23.909 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=1084015 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 1084015 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 1084015 ']' 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
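With both pings answering, nvmf_tcp_init has finished splitting the two E810 ports into a target/initiator pair on a single host: one port moves into a private network namespace so the SPDK target and the kernel initiator traverse a real TCP path over the wire. Condensed from the commands traced above:

    ip netns add cvl_0_0_ns_spdk                      # private namespace for the target
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk         # move the target-side port into it
    ip addr add 10.0.0.1/24 dev cvl_0_1               # initiator stays in the root namespace
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1  # target -> initiator

This is why nvmf_tgt above is launched through 'ip netns exec cvl_0_0_ns_spdk' (the NVMF_TARGET_NS_CMD prefix): the target process binds inside the namespace while the abort tool connects from the root namespace.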
00:36:23.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:23.909 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:23.909 [2024-07-11 21:42:58.537305] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:23.909 [2024-07-11 21:42:58.537377] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:23.909 EAL: No free 2048 kB hugepages reported on node 1 00:36:23.909 [2024-07-11 21:42:58.600010] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:24.167 [2024-07-11 21:42:58.686788] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:24.167 [2024-07-11 21:42:58.686831] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:24.167 [2024-07-11 21:42:58.686854] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:24.167 [2024-07-11 21:42:58.686865] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:24.167 [2024-07-11 21:42:58.686875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:24.167 [2024-07-11 21:42:58.686935] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:24.167 [2024-07-11 21:42:58.686957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:36:24.167 [2024-07-11 21:42:58.687012] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:36:24.167 [2024-07-11 21:42:58.687014] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 00:36:24.167 21:42:58 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:24.167 21:42:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:24.167 ************************************ 00:36:24.167 START TEST spdk_target_abort 00:36:24.167 ************************************ 00:36:24.167 21:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:36:24.167 21:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:36:24.167 21:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:36:24.167 21:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:24.167 21:42:58 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 spdk_targetn1 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 [2024-07-11 21:43:01.695898] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:27.447 [2024-07-11 21:43:01.728165] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:27.447 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:27.448 21:43:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:27.448 EAL: No free 2048 kB hugepages 
reported on node 1 00:36:30.723 Initializing NVMe Controllers 00:36:30.723 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:30.723 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:30.723 Initialization complete. Launching workers. 00:36:30.723 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10025, failed: 0 00:36:30.723 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1346, failed to submit 8679 00:36:30.723 success 777, unsuccess 569, failed 0 00:36:30.723 21:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:30.723 21:43:05 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:30.723 EAL: No free 2048 kB hugepages reported on node 1 00:36:34.001 Initializing NVMe Controllers 00:36:34.001 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:34.001 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:34.001 Initialization complete. Launching workers. 00:36:34.001 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8670, failed: 0 00:36:34.001 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1276, failed to submit 7394 00:36:34.001 success 322, unsuccess 954, failed 0 00:36:34.001 21:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:34.001 21:43:08 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:34.001 EAL: No free 2048 kB hugepages reported on node 1 00:36:37.307 Initializing NVMe Controllers 00:36:37.307 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:36:37.307 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:37.307 Initialization complete. Launching workers. 
00:36:37.307 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31631, failed: 0 00:36:37.307 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2691, failed to submit 28940 00:36:37.307 success 547, unsuccess 2144, failed 0 00:36:37.307 21:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:36:37.307 21:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.307 21:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:37.307 21:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:37.307 21:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:36:37.307 21:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:37.307 21:43:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 1084015 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 1084015 ']' 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 1084015 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1084015 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1084015' 00:36:38.238 killing process with pid 1084015 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 1084015 00:36:38.238 21:43:12 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 1084015 00:36:38.495 00:36:38.495 real 0m14.282s 00:36:38.495 user 0m53.535s 00:36:38.495 sys 0m2.763s 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:38.495 ************************************ 00:36:38.495 END TEST spdk_target_abort 00:36:38.495 ************************************ 00:36:38.495 21:43:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:38.495 21:43:13 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:36:38.495 21:43:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:38.495 21:43:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:38.495 21:43:13 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:38.495 
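That closes the SPDK-target half. All three passes above come from the rabort helper, which runs the abort example once per queue depth against the same connection string; condensed from the trace:

    target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
    for qd in 4 24 64; do
        # -w rw -M 50: 50/50 read-write mix; -o 4096: 4k I/O; -q: queue depth under test
        ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 -r "$target"
    done

In the tallies, 'unsuccess' is not an error: it appears to count aborts whose target I/O had already completed before the abort took effect, while 'success' counts aborts that caught their command in flight; only the 'failed' column tracks aborts that actually errored out.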
************************************ 00:36:38.495 START TEST kernel_target_abort 00:36:38.495 ************************************ 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:36:38.495 21:43:13 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:39.867 Waiting for block devices as requested 00:36:39.867 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:39.867 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:39.867 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:39.867 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:40.125 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:40.125 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:40.125 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:40.125 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:40.382 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:40.382 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:40.382 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:40.382 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:40.639 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:40.639 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:40.639 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:40.639 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:40.897 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:40.897 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:36:40.898 No valid GPT data, bailing 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:40.898 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:41.156 21:43:15 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:36:41.156 00:36:41.156 Discovery Log Number of Records 2, Generation counter 2 00:36:41.156 =====Discovery Log Entry 0====== 00:36:41.156 trtype: tcp 00:36:41.156 adrfam: ipv4 00:36:41.156 subtype: current discovery subsystem 00:36:41.156 treq: not specified, sq flow control disable supported 00:36:41.156 portid: 1 00:36:41.156 trsvcid: 4420 00:36:41.156 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:36:41.156 traddr: 10.0.0.1 00:36:41.156 eflags: none 00:36:41.156 sectype: none 00:36:41.156 =====Discovery Log Entry 1====== 00:36:41.156 trtype: tcp 00:36:41.156 adrfam: ipv4 00:36:41.156 subtype: nvme subsystem 00:36:41.156 treq: not specified, sq flow control disable supported 00:36:41.156 portid: 1 00:36:41.156 trsvcid: 4420 00:36:41.156 subnqn: nqn.2016-06.io.spdk:testnqn 00:36:41.156 traddr: 10.0.0.1 00:36:41.156 eflags: none 00:36:41.156 sectype: none 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.156 21:43:15 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:41.156 21:43:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:41.156 EAL: No free 2048 kB hugepages reported on node 1 00:36:44.435 Initializing NVMe Controllers 00:36:44.435 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:44.435 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:44.435 Initialization complete. Launching workers. 00:36:44.435 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38863, failed: 0 00:36:44.435 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 38863, failed to submit 0 00:36:44.435 success 0, unsuccess 38863, failed 0 00:36:44.435 21:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:44.435 21:43:18 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:44.435 EAL: No free 2048 kB hugepages reported on node 1 00:36:47.716 Initializing NVMe Controllers 00:36:47.716 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:47.716 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:47.716 Initialization complete. Launching workers. 
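The same abort loop now points at 10.0.0.1, where the target is the kernel's own nvmet driver rather than an SPDK process. configure_kernel_target assembled it through configfs; condensed from the trace above, with the redirection targets (which xtrace does not print) filled in with the standard nvmet attribute names — an assumption about what common.sh writes where:

    modprobe nvmet
    subsys=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
    mkdir "$subsys"
    mkdir "$subsys/namespaces/1"
    mkdir /sys/kernel/config/nvmet/ports/1
    echo SPDK-nqn.2016-06.io.spdk:testnqn > "$subsys/attr_model"   # assumed file name
    echo 1            > "$subsys/attr_allow_any_host"              # assumed file name
    echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
    echo 1            > "$subsys/namespaces/1/enable"
    echo 10.0.0.1     > /sys/kernel/config/nvmet/ports/1/addr_traddr
    echo tcp          > /sys/kernel/config/nvmet/ports/1/addr_trtype
    echo 4420         > /sys/kernel/config/nvmet/ports/1/addr_trsvcid
    echo ipv4         > /sys/kernel/config/nvmet/ports/1/addr_adrfam
    ln -s "$subsys" /sys/kernel/config/nvmet/ports/1/subsystems/

The nvme discover output above, listing nqn.2016-06.io.spdk:testnqn behind 10.0.0.1:4420, confirms the wiring before the abort passes start. Note that every abort against the kernel target lands in 'unsuccess' (success 0), suggesting nvmet completes the 4k I/Os before the aborts can catch them; nothing lands in 'failed', so the runs still pass.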
00:36:47.716 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 80560, failed: 0 00:36:47.716 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 20326, failed to submit 60234 00:36:47.716 success 0, unsuccess 20326, failed 0 00:36:47.716 21:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:36:47.716 21:43:21 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:36:47.716 EAL: No free 2048 kB hugepages reported on node 1 00:36:50.993 Initializing NVMe Controllers 00:36:50.993 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:36:50.993 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:36:50.993 Initialization complete. Launching workers. 00:36:50.993 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 74291, failed: 0 00:36:50.993 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 18530, failed to submit 55761 00:36:50.993 success 0, unsuccess 18530, failed 0 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:36:50.993 21:43:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:36:51.561 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:51.561 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:51.561 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:51.561 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:51.561 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:51.561 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:36:51.561 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:51.561 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:51.561 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:36:51.561 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:36:51.819 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:36:51.819 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:36:51.819 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:36:51.819 0000:80:04.2 (8086 0e22): ioatdma -> 
vfio-pci 00:36:51.819 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:36:51.819 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:36:52.753 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:36:52.753 00:36:52.753 real 0m14.222s 00:36:52.753 user 0m5.832s 00:36:52.753 sys 0m3.296s 00:36:52.753 21:43:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:52.753 21:43:27 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:36:52.753 ************************************ 00:36:52.754 END TEST kernel_target_abort 00:36:52.754 ************************************ 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:36:52.754 rmmod nvme_tcp 00:36:52.754 rmmod nvme_fabrics 00:36:52.754 rmmod nvme_keyring 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 1084015 ']' 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 1084015 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 1084015 ']' 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 1084015 00:36:52.754 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (1084015) - No such process 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 1084015 is not found' 00:36:52.754 Process with pid 1084015 is not found 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:36:52.754 21:43:27 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:36:54.127 Waiting for block devices as requested 00:36:54.127 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:36:54.127 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:54.127 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:54.386 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:54.386 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:54.386 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:36:54.386 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:54.386 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:54.644 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:54.644 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:36:54.644 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:36:54.644 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:36:54.902 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:36:54.902 0000:80:04.3 (8086 0e23): vfio-pci -> 
ioatdma 00:36:54.902 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:36:54.902 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:36:55.161 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:36:55.161 21:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:36:55.161 21:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:36:55.161 21:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:36:55.161 21:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:36:55.161 21:43:29 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:55.161 21:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:36:55.161 21:43:29 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:57.692 21:43:31 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:36:57.692 00:36:57.692 real 0m37.745s 00:36:57.692 user 1m1.460s 00:36:57.692 sys 0m9.315s 00:36:57.692 21:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:36:57.692 21:43:31 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:36:57.692 ************************************ 00:36:57.692 END TEST nvmf_abort_qd_sizes 00:36:57.692 ************************************ 00:36:57.692 21:43:31 -- common/autotest_common.sh@1142 -- # return 0 00:36:57.692 21:43:31 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:57.692 21:43:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:36:57.692 21:43:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:36:57.692 21:43:31 -- common/autotest_common.sh@10 -- # set +x 00:36:57.692 ************************************ 00:36:57.692 START TEST keyring_file 00:36:57.692 ************************************ 00:36:57.692 21:43:31 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:36:57.692 * Looking for test storage... 
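Each suite here is driven by autotest_common.sh's run_test wrapper, which produces the START/END banners and the real/user/sys timings seen throughout this log. Roughly — a sketch of the shape, not the exact implementation:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"          # the suite body; a non-zero exit fails the build
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
    }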
00:36:57.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:36:57.692 21:43:31 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:36:57.692 21:43:31 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:57.692 21:43:31 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:57.692 21:43:32 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:57.692 21:43:32 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:57.692 21:43:32 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:57.692 21:43:32 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.692 21:43:32 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.692 21:43:32 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.692 21:43:32 keyring_file -- paths/export.sh@5 -- # export PATH 00:36:57.692 21:43:32 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@47 -- # : 0 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.scsXbbR7Dx 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:57.692 21:43:32 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.scsXbbR7Dx 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.scsXbbR7Dx 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.scsXbbR7Dx 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@17 -- # name=key1 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.DmKkp3YOmv 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:36:57.692 21:43:32 keyring_file -- nvmf/common.sh@705 -- # python - 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.DmKkp3YOmv 00:36:57.692 21:43:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.DmKkp3YOmv 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.DmKkp3YOmv 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@30 -- # tgtpid=1089765 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:36:57.692 21:43:32 keyring_file -- keyring/file.sh@32 -- # waitforlisten 1089765 00:36:57.692 21:43:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1089765 ']' 00:36:57.692 21:43:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:57.692 21:43:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:57.692 21:43:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:57.692 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:57.692 21:43:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:57.692 21:43:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:57.692 [2024-07-11 21:43:32.166501] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 
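As traced above, prep_key (keyring/common.sh) creates each key file with mktemp, fills it via format_interchange_psk, which defers to format_key with the NVMeTLSkey-1 prefix and an inline python snippet, then locks the file down with chmod 0600. The python body never appears in the log; the sketch below assumes the standard NVMe/TCP TLS PSK interchange layout (prefix, a two-hex-digit hash identifier with 00 meaning no hash, then base64 of the raw key bytes with a little-endian CRC32 appended) and is illustrative rather than SPDK's exact implementation:

# illustrative re-creation of the key-preparation step traced above
format_interchange_psk_sketch() {
    local key_hex=$1 digest=$2    # digest: 0 = none, 1 = SHA-256, 2 = SHA-384 (assumed mapping)
    python3 - "$key_hex" "$digest" <<'PYEOF'
import base64, sys, zlib
key = bytes.fromhex(sys.argv[1])                    # "00112233..." -> raw bytes
crc = zlib.crc32(key).to_bytes(4, "little")         # integrity trailer appended to the key
print("NVMeTLSkey-1:%02x:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PYEOF
}
path=$(mktemp)                                                     # e.g. /tmp/tmp.scsXbbR7Dx above
format_interchange_psk_sketch 00112233445566778899aabbccddeeff 0 > "$path"
chmod 0600 "$path"    # the keyring rejects anything looser, as tested later in this run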
00:36:57.692 [2024-07-11 21:43:32.166584] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089765 ] 00:36:57.692 EAL: No free 2048 kB hugepages reported on node 1 00:36:57.692 [2024-07-11 21:43:32.227162] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:57.692 [2024-07-11 21:43:32.317208] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:36:57.950 21:43:32 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:57.950 21:43:32 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:57.950 21:43:32 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:36:57.950 21:43:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.950 21:43:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:57.951 [2024-07-11 21:43:32.581616] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:57.951 null0 00:36:57.951 [2024-07-11 21:43:32.613651] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:36:57.951 [2024-07-11 21:43:32.614158] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:36:57.951 [2024-07-11 21:43:32.621662] tcp.c:3679:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:36:57.951 21:43:32 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:57.951 [2024-07-11 21:43:32.633684] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:36:57.951 request: 00:36:57.951 { 00:36:57.951 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:36:57.951 "secure_channel": false, 00:36:57.951 "listen_address": { 00:36:57.951 "trtype": "tcp", 00:36:57.951 "traddr": "127.0.0.1", 00:36:57.951 "trsvcid": "4420" 00:36:57.951 }, 00:36:57.951 "method": "nvmf_subsystem_add_listener", 00:36:57.951 "req_id": 1 00:36:57.951 } 00:36:57.951 Got JSON-RPC error response 00:36:57.951 response: 00:36:57.951 { 00:36:57.951 "code": -32602, 00:36:57.951 "message": "Invalid parameters" 00:36:57.951 } 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@651 -- # es=1 
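The listener-add attempt just above runs under NOT, autotest's negative-assertion helper: the step passes precisely because the RPC fails (a listener for 127.0.0.1:4420 already exists, so the target answers -32602 Invalid parameters). The local es=0 and (( es > 128 )) lines around it are the helper capturing and classifying the exit status; a simplified sketch of the core behaviour, with that bookkeeping assumed away:

NOT_sketch() {
    if "$@"; then
        return 1       # command unexpectedly succeeded, so the test step fails
    fi
    return 0           # command failed, which is exactly what was asserted
}
# NOT_sketch rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0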
00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:36:57.951 21:43:32 keyring_file -- keyring/file.sh@46 -- # bperfpid=1089769 00:36:57.951 21:43:32 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:36:57.951 21:43:32 keyring_file -- keyring/file.sh@48 -- # waitforlisten 1089769 /var/tmp/bperf.sock 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1089769 ']' 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:57.951 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:36:57.951 21:43:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:36:57.951 [2024-07-11 21:43:32.681855] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:36:57.951 [2024-07-11 21:43:32.681933] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1089769 ] 00:36:57.951 EAL: No free 2048 kB hugepages reported on node 1 00:36:58.217 [2024-07-11 21:43:32.743740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:58.217 [2024-07-11 21:43:32.835072] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:36:58.217 21:43:32 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:36:58.217 21:43:32 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:36:58.217 21:43:32 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx 00:36:58.217 21:43:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx 00:36:58.509 21:43:33 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DmKkp3YOmv 00:36:58.509 21:43:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DmKkp3YOmv 00:36:58.766 21:43:33 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:36:58.766 21:43:33 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:36:58.766 21:43:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:58.766 21:43:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:58.766 21:43:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:59.024 21:43:33 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.scsXbbR7Dx == \/\t\m\p\/\t\m\p\.\s\c\s\X\b\b\R\7\D\x ]] 00:36:59.024 21:43:33 keyring_file -- 
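From this point the test drives the bdevperf application rather than the target: bperf_cmd is rpc.py aimed at bdevperf's RPC socket (bperfsock=/var/tmp/bperf.sock, set in keyring/common.sh near the top of this test). Restated with the checkout path visible in the log:

rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk    # as in the absolute paths above
bperf_cmd_sketch() {
    "$rootdir/scripts/rpc.py" -s /var/tmp/bperf.sock "$@"
}
bperf_cmd_sketch keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx    # the call just logged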
keyring/file.sh@52 -- # get_key key1 00:36:59.024 21:43:33 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:36:59.024 21:43:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.024 21:43:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.024 21:43:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:59.282 21:43:33 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.DmKkp3YOmv == \/\t\m\p\/\t\m\p\.\D\m\K\k\p\3\Y\O\m\v ]] 00:36:59.282 21:43:33 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:36:59.282 21:43:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:36:59.282 21:43:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:59.282 21:43:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.282 21:43:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.282 21:43:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:36:59.540 21:43:34 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:36:59.540 21:43:34 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:36:59.540 21:43:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:36:59.540 21:43:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:36:59.540 21:43:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:36:59.540 21:43:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:59.540 21:43:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:36:59.797 21:43:34 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:36:59.797 21:43:34 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:36:59.797 21:43:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:00.055 [2024-07-11 21:43:34.660654] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:00.055 nvme0n1 00:37:00.055 21:43:34 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:37:00.055 21:43:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:00.055 21:43:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.055 21:43:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:00.055 21:43:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:00.055 21:43:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:00.313 21:43:34 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:37:00.313 21:43:34 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:37:00.313 21:43:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:00.313 21:43:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:00.313 21:43:34 keyring_file -- 
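The path checks running here pick one key object out of keyring_get_keys with jq and compare it in bash. The heavily backslashed right-hand sides in the logged [[ ... == \/\t\m\p\/... ]] tests are an xtrace artifact: inside [[ ]] an unquoted right operand is a glob pattern, so the helper quotes it to force a literal match and xtrace renders that as per-character escapes. The same check, restated:

get_key_sketch() {    # mirrors get_key in keyring/common.sh, per the jq filters above
    bperf_cmd_sketch keyring_get_keys | jq ".[] | select(.name == \"$1\")"
}
[[ $(get_key_sketch key0 | jq -r .path) == "/tmp/tmp.scsXbbR7Dx" ]]    # quoted RHS = literal compare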
keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:00.313 21:43:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:00.313 21:43:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:00.572 21:43:35 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 ))
00:37:00.572 21:43:35 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests
00:37:00.830 Running I/O for 1 seconds...
00:37:01.765
00:37:01.765 Latency(us)
00:37:01.765 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:37:01.765 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096)
00:37:01.765 nvme0n1 : 1.01 7036.88 27.49 0.00 0.00 18077.46 6456.51 27185.30
00:37:01.765 ===================================================================================================================
00:37:01.765 Total : 7036.88 27.49 0.00 0.00 18077.46 6456.51 27185.30
00:37:01.765 0
00:37:01.765 21:43:36 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0
00:37:01.765 21:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0
00:37:02.022 21:43:36 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0
00:37:02.022 21:43:36 keyring_file -- keyring/common.sh@12 -- # get_key key0
00:37:02.022 21:43:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:02.023 21:43:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:02.023 21:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:02.023 21:43:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")'
00:37:02.280 21:43:36 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 ))
00:37:02.280 21:43:36 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1
00:37:02.280 21:43:36 keyring_file -- keyring/common.sh@12 -- # get_key key1
00:37:02.280 21:43:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt
00:37:02.280 21:43:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys
00:37:02.280 21:43:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys
00:37:02.280 21:43:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")'
00:37:02.538 21:43:37 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 ))
00:37:02.538 21:43:37 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:02.538 21:43:37 keyring_file -- common/autotest_common.sh@648 -- # local es=0
00:37:02.538 21:43:37 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1
00:37:02.538 21:43:37 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd
00:37:02.538 21:43:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in
00:37:02.538 21:43:37 keyring_file --
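Two details worth noting in the I/O pass above: attaching the TLS controller bumped key0's refcount to 2 (the live connection pins the key, and detaching drops it back to 1), and the workload itself has to be kicked off over RPC because bdevperf was started with -z (wait for RPC). The start command, exactly as logged:

# bdevperf idles until this RPC tells it to run the configured random read/write pass
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py \
    -s /var/tmp/bperf.sock perform_tests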
common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:02.538 21:43:37 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:02.538 21:43:37 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:02.538 21:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:37:02.795 [2024-07-11 21:43:37.366088] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:02.795 [2024-07-11 21:43:37.366983] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d268f0 (107): Transport endpoint is not connected 00:37:02.795 [2024-07-11 21:43:37.367977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d268f0 (9): Bad file descriptor 00:37:02.795 [2024-07-11 21:43:37.368977] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:02.796 [2024-07-11 21:43:37.368998] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:02.796 [2024-07-11 21:43:37.369012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:02.796 request: 00:37:02.796 { 00:37:02.796 "name": "nvme0", 00:37:02.796 "trtype": "tcp", 00:37:02.796 "traddr": "127.0.0.1", 00:37:02.796 "adrfam": "ipv4", 00:37:02.796 "trsvcid": "4420", 00:37:02.796 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:02.796 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:02.796 "prchk_reftag": false, 00:37:02.796 "prchk_guard": false, 00:37:02.796 "hdgst": false, 00:37:02.796 "ddgst": false, 00:37:02.796 "psk": "key1", 00:37:02.796 "method": "bdev_nvme_attach_controller", 00:37:02.796 "req_id": 1 00:37:02.796 } 00:37:02.796 Got JSON-RPC error response 00:37:02.796 response: 00:37:02.796 { 00:37:02.796 "code": -5, 00:37:02.796 "message": "Input/output error" 00:37:02.796 } 00:37:02.796 21:43:37 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:02.796 21:43:37 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:02.796 21:43:37 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:02.796 21:43:37 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:02.796 21:43:37 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:37:02.796 21:43:37 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:02.796 21:43:37 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:02.796 21:43:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:02.796 21:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:02.796 21:43:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:03.054 21:43:37 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:37:03.054 21:43:37 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:37:03.054 21:43:37 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:37:03.054 21:43:37 
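This negative attach dials the same listener but presents key1, while the target side was provisioned with the first key for host0, so the TLS handshake never completes: the socket drops (errno 107, Transport endpoint is not connected, then the bad-descriptor poll), the controller lands in a failed state, and the RPC surfaces -5 Input/output error. For contrast, the successful form used earlier in the test:

bperf_cmd_sketch bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 \
    --psk key0    # must name the key the target expects for this host NQN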
keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:03.054 21:43:37 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:03.054 21:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.054 21:43:37 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:03.312 21:43:37 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:37:03.312 21:43:37 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:37:03.312 21:43:37 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:03.570 21:43:38 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:37:03.570 21:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:37:03.827 21:43:38 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:37:03.827 21:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:03.827 21:43:38 keyring_file -- keyring/file.sh@77 -- # jq length 00:37:04.085 21:43:38 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:37:04.085 21:43:38 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.scsXbbR7Dx 00:37:04.085 21:43:38 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx 00:37:04.085 21:43:38 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:04.085 21:43:38 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx 00:37:04.085 21:43:38 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:04.085 21:43:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:04.085 21:43:38 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:04.085 21:43:38 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:04.085 21:43:38 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx 00:37:04.085 21:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx 00:37:04.343 [2024-07-11 21:43:38.872971] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.scsXbbR7Dx': 0100660 00:37:04.343 [2024-07-11 21:43:38.873005] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:37:04.343 request: 00:37:04.343 { 00:37:04.343 "name": "key0", 00:37:04.343 "path": "/tmp/tmp.scsXbbR7Dx", 00:37:04.343 "method": "keyring_file_add_key", 00:37:04.343 "req_id": 1 00:37:04.343 } 00:37:04.343 Got JSON-RPC error response 00:37:04.343 response: 00:37:04.343 { 00:37:04.343 "code": -1, 00:37:04.343 "message": "Operation not permitted" 00:37:04.343 } 00:37:04.343 21:43:38 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:04.343 21:43:38 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:04.343 21:43:38 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:04.343 21:43:38 keyring_file -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:04.343 21:43:38 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.scsXbbR7Dx 00:37:04.343 21:43:38 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx 00:37:04.343 21:43:38 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.scsXbbR7Dx 00:37:04.600 21:43:39 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.scsXbbR7Dx 00:37:04.600 21:43:39 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:37:04.600 21:43:39 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:04.600 21:43:39 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:04.600 21:43:39 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:04.600 21:43:39 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:04.600 21:43:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:04.858 21:43:39 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:37:04.858 21:43:39 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.858 21:43:39 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:37:04.858 21:43:39 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.858 21:43:39 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:04.858 21:43:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:04.858 21:43:39 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:04.858 21:43:39 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:04.858 21:43:39 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.858 21:43:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:04.858 [2024-07-11 21:43:39.611019] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.scsXbbR7Dx': No such file or directory 00:37:04.858 [2024-07-11 21:43:39.611078] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:37:04.858 [2024-07-11 21:43:39.611132] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:37:04.858 [2024-07-11 21:43:39.611147] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:37:04.858 [2024-07-11 21:43:39.611160] bdev_nvme.c:6268:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:37:04.858 request: 00:37:04.858 { 00:37:04.858 "name": "nvme0", 00:37:04.858 "trtype": "tcp", 00:37:04.858 "traddr": "127.0.0.1", 00:37:04.858 "adrfam": "ipv4", 00:37:04.858 
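The chmod steps above pin down the keyring's permission rule: keyring_file_add_key refuses a key file that is group- or world-accessible (0660 fails with 'Invalid permissions ... 0100660' and -1 Operation not permitted) and accepts the same file again at 0600. The follow-on rm -f sets up the other failure mode, using a key whose backing file has vanished. Condensed, reusing the sketches from above:

chmod 0660 "$path"
NOT_sketch bperf_cmd_sketch keyring_file_add_key key0 "$path"    # rejected: too permissive
chmod 0600 "$path"
bperf_cmd_sketch keyring_file_add_key key0 "$path"               # accepted: owner-only
rm -f "$path"    # key stays registered, but attaching now fails with -19 (No such device), below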
"trsvcid": "4420", 00:37:04.858 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:04.858 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:04.858 "prchk_reftag": false, 00:37:04.858 "prchk_guard": false, 00:37:04.858 "hdgst": false, 00:37:04.858 "ddgst": false, 00:37:04.858 "psk": "key0", 00:37:04.858 "method": "bdev_nvme_attach_controller", 00:37:04.858 "req_id": 1 00:37:04.858 } 00:37:04.858 Got JSON-RPC error response 00:37:04.858 response: 00:37:04.859 { 00:37:04.859 "code": -19, 00:37:04.859 "message": "No such device" 00:37:04.859 } 00:37:05.136 21:43:39 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:37:05.136 21:43:39 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:05.136 21:43:39 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:05.136 21:43:39 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:05.136 21:43:39 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:37:05.136 21:43:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:05.136 21:43:39 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:37:05.136 21:43:39 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:37:05.136 21:43:39 keyring_file -- keyring/common.sh@17 -- # name=key0 00:37:05.136 21:43:39 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:05.136 21:43:39 keyring_file -- keyring/common.sh@17 -- # digest=0 00:37:05.136 21:43:39 keyring_file -- keyring/common.sh@18 -- # mktemp 00:37:05.136 21:43:39 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LUw8iBk7Hr 00:37:05.136 21:43:39 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:05.136 21:43:39 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:05.136 21:43:39 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:37:05.136 21:43:39 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:05.136 21:43:39 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:05.136 21:43:39 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:37:05.136 21:43:39 keyring_file -- nvmf/common.sh@705 -- # python - 00:37:05.393 21:43:39 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LUw8iBk7Hr 00:37:05.393 21:43:39 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LUw8iBk7Hr 00:37:05.393 21:43:39 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.LUw8iBk7Hr 00:37:05.393 21:43:39 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LUw8iBk7Hr 00:37:05.393 21:43:39 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LUw8iBk7Hr 00:37:05.650 21:43:40 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:05.650 21:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:05.908 nvme0n1 00:37:05.908 
21:43:40 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:37:05.908 21:43:40 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:05.908 21:43:40 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:05.908 21:43:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:05.908 21:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:05.908 21:43:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:06.166 21:43:40 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:37:06.166 21:43:40 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:37:06.166 21:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:37:06.423 21:43:40 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:37:06.423 21:43:40 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:37:06.423 21:43:40 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.423 21:43:40 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.423 21:43:40 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:06.680 21:43:41 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:37:06.680 21:43:41 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:37:06.680 21:43:41 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:06.680 21:43:41 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:06.680 21:43:41 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:06.680 21:43:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:06.680 21:43:41 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:06.938 21:43:41 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:37:06.938 21:43:41 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:06.938 21:43:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:07.195 21:43:41 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:37:07.195 21:43:41 keyring_file -- keyring/file.sh@104 -- # jq length 00:37:07.195 21:43:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:07.195 21:43:41 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:37:07.195 21:43:41 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LUw8iBk7Hr 00:37:07.195 21:43:41 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LUw8iBk7Hr 00:37:07.762 21:43:42 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.DmKkp3YOmv 00:37:07.762 21:43:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.DmKkp3YOmv 00:37:07.762 21:43:42 
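The sequence above exercises deferred removal: calling keyring_file_remove_key on a key that an attached controller still references does not destroy it outright; the key is flagged removed: true and keeps refcnt 1 until bdev_nvme_detach_controller releases it, after which keyring_get_keys reports an empty keyring. The check pattern, condensed from the log:

bperf_cmd_sketch keyring_file_remove_key key0
bperf_cmd_sketch keyring_get_keys | jq -r '.[] | select(.name == "key0") | .removed'   # true
bperf_cmd_sketch keyring_get_keys | jq -r '.[] | select(.name == "key0") | .refcnt'    # still 1
bperf_cmd_sketch bdev_nvme_detach_controller nvme0
bperf_cmd_sketch keyring_get_keys | jq length                                          # 0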
keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:07.762 21:43:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:37:08.020 nvme0n1 00:37:08.020 21:43:42 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:37:08.020 21:43:42 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:37:08.587 21:43:43 keyring_file -- keyring/file.sh@112 -- # config='{ 00:37:08.587 "subsystems": [ 00:37:08.587 { 00:37:08.587 "subsystem": "keyring", 00:37:08.587 "config": [ 00:37:08.587 { 00:37:08.587 "method": "keyring_file_add_key", 00:37:08.587 "params": { 00:37:08.587 "name": "key0", 00:37:08.587 "path": "/tmp/tmp.LUw8iBk7Hr" 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "keyring_file_add_key", 00:37:08.587 "params": { 00:37:08.587 "name": "key1", 00:37:08.587 "path": "/tmp/tmp.DmKkp3YOmv" 00:37:08.587 } 00:37:08.587 } 00:37:08.587 ] 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "subsystem": "iobuf", 00:37:08.587 "config": [ 00:37:08.587 { 00:37:08.587 "method": "iobuf_set_options", 00:37:08.587 "params": { 00:37:08.587 "small_pool_count": 8192, 00:37:08.587 "large_pool_count": 1024, 00:37:08.587 "small_bufsize": 8192, 00:37:08.587 "large_bufsize": 135168 00:37:08.587 } 00:37:08.587 } 00:37:08.587 ] 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "subsystem": "sock", 00:37:08.587 "config": [ 00:37:08.587 { 00:37:08.587 "method": "sock_set_default_impl", 00:37:08.587 "params": { 00:37:08.587 "impl_name": "posix" 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "sock_impl_set_options", 00:37:08.587 "params": { 00:37:08.587 "impl_name": "ssl", 00:37:08.587 "recv_buf_size": 4096, 00:37:08.587 "send_buf_size": 4096, 00:37:08.587 "enable_recv_pipe": true, 00:37:08.587 "enable_quickack": false, 00:37:08.587 "enable_placement_id": 0, 00:37:08.587 "enable_zerocopy_send_server": true, 00:37:08.587 "enable_zerocopy_send_client": false, 00:37:08.587 "zerocopy_threshold": 0, 00:37:08.587 "tls_version": 0, 00:37:08.587 "enable_ktls": false 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "sock_impl_set_options", 00:37:08.587 "params": { 00:37:08.587 "impl_name": "posix", 00:37:08.587 "recv_buf_size": 2097152, 00:37:08.587 "send_buf_size": 2097152, 00:37:08.587 "enable_recv_pipe": true, 00:37:08.587 "enable_quickack": false, 00:37:08.587 "enable_placement_id": 0, 00:37:08.587 "enable_zerocopy_send_server": true, 00:37:08.587 "enable_zerocopy_send_client": false, 00:37:08.587 "zerocopy_threshold": 0, 00:37:08.587 "tls_version": 0, 00:37:08.587 "enable_ktls": false 00:37:08.587 } 00:37:08.587 } 00:37:08.587 ] 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "subsystem": "vmd", 00:37:08.587 "config": [] 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "subsystem": "accel", 00:37:08.587 "config": [ 00:37:08.587 { 00:37:08.587 "method": "accel_set_options", 00:37:08.587 "params": { 00:37:08.587 "small_cache_size": 128, 00:37:08.587 "large_cache_size": 16, 00:37:08.587 "task_count": 2048, 00:37:08.587 "sequence_count": 2048, 00:37:08.587 "buf_count": 2048 00:37:08.587 } 00:37:08.587 } 00:37:08.587 ] 00:37:08.587 
}, 00:37:08.587 { 00:37:08.587 "subsystem": "bdev", 00:37:08.587 "config": [ 00:37:08.587 { 00:37:08.587 "method": "bdev_set_options", 00:37:08.587 "params": { 00:37:08.587 "bdev_io_pool_size": 65535, 00:37:08.587 "bdev_io_cache_size": 256, 00:37:08.587 "bdev_auto_examine": true, 00:37:08.587 "iobuf_small_cache_size": 128, 00:37:08.587 "iobuf_large_cache_size": 16 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "bdev_raid_set_options", 00:37:08.587 "params": { 00:37:08.587 "process_window_size_kb": 1024 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "bdev_iscsi_set_options", 00:37:08.587 "params": { 00:37:08.587 "timeout_sec": 30 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "bdev_nvme_set_options", 00:37:08.587 "params": { 00:37:08.587 "action_on_timeout": "none", 00:37:08.587 "timeout_us": 0, 00:37:08.587 "timeout_admin_us": 0, 00:37:08.587 "keep_alive_timeout_ms": 10000, 00:37:08.587 "arbitration_burst": 0, 00:37:08.587 "low_priority_weight": 0, 00:37:08.587 "medium_priority_weight": 0, 00:37:08.587 "high_priority_weight": 0, 00:37:08.587 "nvme_adminq_poll_period_us": 10000, 00:37:08.587 "nvme_ioq_poll_period_us": 0, 00:37:08.587 "io_queue_requests": 512, 00:37:08.587 "delay_cmd_submit": true, 00:37:08.587 "transport_retry_count": 4, 00:37:08.587 "bdev_retry_count": 3, 00:37:08.587 "transport_ack_timeout": 0, 00:37:08.587 "ctrlr_loss_timeout_sec": 0, 00:37:08.587 "reconnect_delay_sec": 0, 00:37:08.587 "fast_io_fail_timeout_sec": 0, 00:37:08.587 "disable_auto_failback": false, 00:37:08.587 "generate_uuids": false, 00:37:08.587 "transport_tos": 0, 00:37:08.587 "nvme_error_stat": false, 00:37:08.587 "rdma_srq_size": 0, 00:37:08.587 "io_path_stat": false, 00:37:08.587 "allow_accel_sequence": false, 00:37:08.587 "rdma_max_cq_size": 0, 00:37:08.587 "rdma_cm_event_timeout_ms": 0, 00:37:08.587 "dhchap_digests": [ 00:37:08.587 "sha256", 00:37:08.587 "sha384", 00:37:08.587 "sha512" 00:37:08.587 ], 00:37:08.587 "dhchap_dhgroups": [ 00:37:08.587 "null", 00:37:08.587 "ffdhe2048", 00:37:08.587 "ffdhe3072", 00:37:08.587 "ffdhe4096", 00:37:08.587 "ffdhe6144", 00:37:08.587 "ffdhe8192" 00:37:08.587 ] 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "bdev_nvme_attach_controller", 00:37:08.587 "params": { 00:37:08.587 "name": "nvme0", 00:37:08.587 "trtype": "TCP", 00:37:08.587 "adrfam": "IPv4", 00:37:08.587 "traddr": "127.0.0.1", 00:37:08.587 "trsvcid": "4420", 00:37:08.587 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:08.587 "prchk_reftag": false, 00:37:08.587 "prchk_guard": false, 00:37:08.587 "ctrlr_loss_timeout_sec": 0, 00:37:08.587 "reconnect_delay_sec": 0, 00:37:08.587 "fast_io_fail_timeout_sec": 0, 00:37:08.587 "psk": "key0", 00:37:08.587 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:08.587 "hdgst": false, 00:37:08.587 "ddgst": false 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "bdev_nvme_set_hotplug", 00:37:08.587 "params": { 00:37:08.587 "period_us": 100000, 00:37:08.587 "enable": false 00:37:08.587 } 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "method": "bdev_wait_for_examine" 00:37:08.587 } 00:37:08.587 ] 00:37:08.587 }, 00:37:08.587 { 00:37:08.587 "subsystem": "nbd", 00:37:08.587 "config": [] 00:37:08.587 } 00:37:08.587 ] 00:37:08.587 }' 00:37:08.587 21:43:43 keyring_file -- keyring/file.sh@114 -- # killprocess 1089769 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1089769 ']' 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@952 -- # kill 
-0 1089769 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1089769 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1089769' 00:37:08.587 killing process with pid 1089769 00:37:08.587 21:43:43 keyring_file -- common/autotest_common.sh@967 -- # kill 1089769 00:37:08.587 Received shutdown signal, test time was about 1.000000 seconds 00:37:08.587 00:37:08.587 Latency(us) 00:37:08.587 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:08.588 =================================================================================================================== 00:37:08.588 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:08.588 21:43:43 keyring_file -- common/autotest_common.sh@972 -- # wait 1089769 00:37:08.588 21:43:43 keyring_file -- keyring/file.sh@117 -- # bperfpid=1091209 00:37:08.588 21:43:43 keyring_file -- keyring/file.sh@119 -- # waitforlisten 1091209 /var/tmp/bperf.sock 00:37:08.588 21:43:43 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 1091209 ']' 00:37:08.588 21:43:43 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:37:08.588 21:43:43 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:08.588 21:43:43 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:08.588 21:43:43 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:37:08.588 "subsystems": [ 00:37:08.588 { 00:37:08.588 "subsystem": "keyring", 00:37:08.588 "config": [ 00:37:08.588 { 00:37:08.588 "method": "keyring_file_add_key", 00:37:08.588 "params": { 00:37:08.588 "name": "key0", 00:37:08.588 "path": "/tmp/tmp.LUw8iBk7Hr" 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "keyring_file_add_key", 00:37:08.588 "params": { 00:37:08.588 "name": "key1", 00:37:08.588 "path": "/tmp/tmp.DmKkp3YOmv" 00:37:08.588 } 00:37:08.588 } 00:37:08.588 ] 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "subsystem": "iobuf", 00:37:08.588 "config": [ 00:37:08.588 { 00:37:08.588 "method": "iobuf_set_options", 00:37:08.588 "params": { 00:37:08.588 "small_pool_count": 8192, 00:37:08.588 "large_pool_count": 1024, 00:37:08.588 "small_bufsize": 8192, 00:37:08.588 "large_bufsize": 135168 00:37:08.588 } 00:37:08.588 } 00:37:08.588 ] 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "subsystem": "sock", 00:37:08.588 "config": [ 00:37:08.588 { 00:37:08.588 "method": "sock_set_default_impl", 00:37:08.588 "params": { 00:37:08.588 "impl_name": "posix" 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "sock_impl_set_options", 00:37:08.588 "params": { 00:37:08.588 "impl_name": "ssl", 00:37:08.588 "recv_buf_size": 4096, 00:37:08.588 "send_buf_size": 4096, 00:37:08.588 "enable_recv_pipe": true, 00:37:08.588 "enable_quickack": false, 00:37:08.588 "enable_placement_id": 0, 00:37:08.588 "enable_zerocopy_send_server": true, 00:37:08.588 "enable_zerocopy_send_client": false, 00:37:08.588 "zerocopy_threshold": 0, 00:37:08.588 
"tls_version": 0, 00:37:08.588 "enable_ktls": false 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "sock_impl_set_options", 00:37:08.588 "params": { 00:37:08.588 "impl_name": "posix", 00:37:08.588 "recv_buf_size": 2097152, 00:37:08.588 "send_buf_size": 2097152, 00:37:08.588 "enable_recv_pipe": true, 00:37:08.588 "enable_quickack": false, 00:37:08.588 "enable_placement_id": 0, 00:37:08.588 "enable_zerocopy_send_server": true, 00:37:08.588 "enable_zerocopy_send_client": false, 00:37:08.588 "zerocopy_threshold": 0, 00:37:08.588 "tls_version": 0, 00:37:08.588 "enable_ktls": false 00:37:08.588 } 00:37:08.588 } 00:37:08.588 ] 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "subsystem": "vmd", 00:37:08.588 "config": [] 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "subsystem": "accel", 00:37:08.588 "config": [ 00:37:08.588 { 00:37:08.588 "method": "accel_set_options", 00:37:08.588 "params": { 00:37:08.588 "small_cache_size": 128, 00:37:08.588 "large_cache_size": 16, 00:37:08.588 "task_count": 2048, 00:37:08.588 "sequence_count": 2048, 00:37:08.588 "buf_count": 2048 00:37:08.588 } 00:37:08.588 } 00:37:08.588 ] 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "subsystem": "bdev", 00:37:08.588 "config": [ 00:37:08.588 { 00:37:08.588 "method": "bdev_set_options", 00:37:08.588 "params": { 00:37:08.588 "bdev_io_pool_size": 65535, 00:37:08.588 "bdev_io_cache_size": 256, 00:37:08.588 "bdev_auto_examine": true, 00:37:08.588 "iobuf_small_cache_size": 128, 00:37:08.588 "iobuf_large_cache_size": 16 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "bdev_raid_set_options", 00:37:08.588 "params": { 00:37:08.588 "process_window_size_kb": 1024 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "bdev_iscsi_set_options", 00:37:08.588 "params": { 00:37:08.588 "timeout_sec": 30 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "bdev_nvme_set_options", 00:37:08.588 "params": { 00:37:08.588 "action_on_timeout": "none", 00:37:08.588 "timeout_us": 0, 00:37:08.588 "timeout_admin_us": 0, 00:37:08.588 "keep_alive_timeout_ms": 10000, 00:37:08.588 "arbitration_burst": 0, 00:37:08.588 "low_priority_weight": 0, 00:37:08.588 "medium_priority_weight": 0, 00:37:08.588 "high_priority_weight": 0, 00:37:08.588 "nvme_adminq_poll_period_us": 10000, 00:37:08.588 "nvme_ioq_poll_period_us": 0, 00:37:08.588 "io_queue_requests": 512, 00:37:08.588 "delay_cmd_submit": true, 00:37:08.588 "transport_retry_count": 4, 00:37:08.588 "bdev_retry_count": 3, 00:37:08.588 "transport_ack_timeout": 0, 00:37:08.588 "ctrlr_loss_timeout_sec": 0, 00:37:08.588 "reconnect_delay_sec": 0, 00:37:08.588 "fast_io_fail_timeout_sec": 0, 00:37:08.588 "disable_auto_failback": false, 00:37:08.588 "generate_uuids": false, 00:37:08.588 "transport_tos": 0, 00:37:08.588 "nvme_error_stat": false, 00:37:08.588 "rdma_srq_size": 0, 00:37:08.588 "io_path_stat": false, 00:37:08.588 "allow_accel_sequence": false, 00:37:08.588 "rdma_max_cq_size": 0, 00:37:08.588 "rdma_cm_event_timeout_ms": 0, 00:37:08.588 "dhchap_digests": [ 00:37:08.588 "sha256", 00:37:08.588 "sha384", 00:37:08.588 "sha512" 00:37:08.588 ], 00:37:08.588 "dhchap_dhgroups": [ 00:37:08.588 "null", 00:37:08.588 "ffdhe2048", 00:37:08.588 "ffdhe3072", 00:37:08.588 "ffdhe4096", 00:37:08.588 "ffdhe6144", 00:37:08.588 "ffdhe8192" 00:37:08.588 ] 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "bdev_nvme_attach_controller", 00:37:08.588 "params": { 00:37:08.588 "name": "nvme0", 00:37:08.588 "trtype": "TCP", 00:37:08.588 "adrfam": "IPv4", 
00:37:08.588 "traddr": "127.0.0.1", 00:37:08.588 "trsvcid": "4420", 00:37:08.588 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:08.588 "prchk_reftag": false, 00:37:08.588 "prchk_guard": false, 00:37:08.588 "ctrlr_loss_timeout_sec": 0, 00:37:08.588 "reconnect_delay_sec": 0, 00:37:08.588 "fast_io_fail_timeout_sec": 0, 00:37:08.588 "psk": "key0", 00:37:08.588 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:08.588 "hdgst": false, 00:37:08.588 "ddgst": false 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "bdev_nvme_set_hotplug", 00:37:08.588 "params": { 00:37:08.588 "period_us": 100000, 00:37:08.588 "enable": false 00:37:08.588 } 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "method": "bdev_wait_for_examine" 00:37:08.588 } 00:37:08.588 ] 00:37:08.588 }, 00:37:08.588 { 00:37:08.588 "subsystem": "nbd", 00:37:08.588 "config": [] 00:37:08.588 } 00:37:08.588 ] 00:37:08.588 }' 00:37:08.588 21:43:43 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:08.588 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:37:08.588 21:43:43 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:08.588 21:43:43 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:08.846 [2024-07-11 21:43:43.366475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:08.846 [2024-07-11 21:43:43.366573] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091209 ] 00:37:08.846 EAL: No free 2048 kB hugepages reported on node 1 00:37:08.846 [2024-07-11 21:43:43.428388] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:08.846 [2024-07-11 21:43:43.518216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:09.105 [2024-07-11 21:43:43.709439] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:09.670 21:43:44 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:09.670 21:43:44 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:37:09.670 21:43:44 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:37:09.670 21:43:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.670 21:43:44 keyring_file -- keyring/file.sh@120 -- # jq length 00:37:09.928 21:43:44 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:37:09.928 21:43:44 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:37:09.928 21:43:44 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:37:09.928 21:43:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:09.928 21:43:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:09.928 21:43:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:09.928 21:43:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:37:10.185 21:43:44 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:37:10.185 21:43:44 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:37:10.185 21:43:44 keyring_file -- keyring/common.sh@12 -- # 
get_key key1 00:37:10.185 21:43:44 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:37:10.185 21:43:44 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:10.185 21:43:44 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:10.185 21:43:44 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:37:10.443 21:43:45 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:37:10.443 21:43:45 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:37:10.443 21:43:45 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:37:10.443 21:43:45 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:37:10.702 21:43:45 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:37:10.702 21:43:45 keyring_file -- keyring/file.sh@1 -- # cleanup 00:37:10.702 21:43:45 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.LUw8iBk7Hr /tmp/tmp.DmKkp3YOmv 00:37:10.702 21:43:45 keyring_file -- keyring/file.sh@20 -- # killprocess 1091209 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1091209 ']' 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1091209 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1091209 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1091209' 00:37:10.702 killing process with pid 1091209 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@967 -- # kill 1091209 00:37:10.702 Received shutdown signal, test time was about 1.000000 seconds 00:37:10.702 00:37:10.702 Latency(us) 00:37:10.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:10.702 =================================================================================================================== 00:37:10.702 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:37:10.702 21:43:45 keyring_file -- common/autotest_common.sh@972 -- # wait 1091209 00:37:10.960 21:43:45 keyring_file -- keyring/file.sh@21 -- # killprocess 1089765 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 1089765 ']' 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@952 -- # kill -0 1089765 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@953 -- # uname 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1089765 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1089765' 00:37:10.960 killing process with pid 1089765 00:37:10.960 21:43:45 keyring_file -- 
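cleanup, registered with trap cleanup EXIT at the top of file.sh, removes the temporary key files and tears both applications down through killprocess, which double-checks a pid before signalling it (the ps --no-headers -o comm= probes above, which also guard the sudo case). A condensed sketch assuming only the checks visible in this log; the real helper in autotest_common.sh treats a sudo-owned pid differently rather than simply refusing:

killprocess_sketch() {
    local pid=$1 name
    [[ -n $pid ]] || return 1
    kill -0 "$pid" 2> /dev/null || return 0             # already gone, nothing to do
    name=$(ps --no-headers -o comm= "$pid")             # e.g. reactor_0 / reactor_1 above
    [[ $name != sudo ]] || return 1                     # never signal the sudo wrapper itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2> /dev/null || true                    # reap it when it is our child
}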
common/autotest_common.sh@967 -- # kill 1089765 00:37:10.960 [2024-07-11 21:43:45.588981] app.c:1023:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:37:10.960 21:43:45 keyring_file -- common/autotest_common.sh@972 -- # wait 1089765 00:37:11.527 00:37:11.528 real 0m14.071s 00:37:11.528 user 0m34.936s 00:37:11.528 sys 0m3.318s 00:37:11.528 21:43:46 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:11.528 21:43:46 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:37:11.528 ************************************ 00:37:11.528 END TEST keyring_file 00:37:11.528 ************************************ 00:37:11.528 21:43:46 -- common/autotest_common.sh@1142 -- # return 0 00:37:11.528 21:43:46 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:37:11.528 21:43:46 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:11.528 21:43:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:37:11.528 21:43:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:37:11.528 21:43:46 -- common/autotest_common.sh@10 -- # set +x 00:37:11.528 ************************************ 00:37:11.528 START TEST keyring_linux 00:37:11.528 ************************************ 00:37:11.528 21:43:46 keyring_linux -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:37:11.528 * Looking for test storage... 00:37:11.528 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:11.528 21:43:46 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:11.528 21:43:46 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:11.528 21:43:46 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:11.528 21:43:46 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.528 21:43:46 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.528 21:43:46 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.528 21:43:46 keyring_linux -- paths/export.sh@5 -- # export PATH 00:37:11.528 21:43:46 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:37:11.528 21:43:46 
keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:37:11.528 /tmp/:spdk-test:key0 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:37:11.528 21:43:46 keyring_linux -- nvmf/common.sh@705 -- # python - 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:37:11.528 21:43:46 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:37:11.528 /tmp/:spdk-test:key1 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=1091589 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:37:11.528 21:43:46 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 1091589 00:37:11.528 21:43:46 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1091589 ']' 00:37:11.528 21:43:46 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:11.528 21:43:46 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 
00:37:11.528 21:43:46 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:11.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:11.528 21:43:46 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:11.528 21:43:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:11.528 [2024-07-11 21:43:46.273520] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:11.528 [2024-07-11 21:43:46.273601] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091589 ] 00:37:11.787 EAL: No free 2048 kB hugepages reported on node 1 00:37:11.787 [2024-07-11 21:43:46.332456] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:11.787 [2024-07-11 21:43:46.420035] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:37:12.045 21:43:46 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:12.046 21:43:46 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:12.046 [2024-07-11 21:43:46.664083] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:12.046 null0 00:37:12.046 [2024-07-11 21:43:46.696136] tcp.c: 928:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:37:12.046 [2024-07-11 21:43:46.696614] tcp.c: 967:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:37:12.046 21:43:46 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:37:12.046 285730261 00:37:12.046 21:43:46 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:37:12.046 652165507 00:37:12.046 21:43:46 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=1091708 00:37:12.046 21:43:46 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:37:12.046 21:43:46 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 1091708 /var/tmp/bperf.sock 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 1091708 ']' 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:37:12.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
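The two keyctl calls above load NVMe/TCP interchange-format PSKs into the session keyring, and the serials they return (285730261 and 652165507) are what the later checks compare against. A minimal sketch of how such an interchange string can be rebuilt, assuming the framing is base64(key bytes || little-endian CRC32 of the key) inside an NVMeTLSkey-1:<digest>:...: wrapper; this mirrors the prep_key/format_interchange_psk steps in the trace but is a sketch under that assumption, not a copy of nvmf/common.sh:

# Hedged sketch (assumed CRC32 framing): rebuild key0's interchange PSK.
key=00112233445566778899aabbccddeeff    # PSK configured above, taken as ASCII bytes
digest=0                                # 0 => no HMAC transform, matching the "NVMeTLSkey-1:00:" prefix
psk=$(python3 - "$key" "$digest" <<'PY'
import base64, struct, sys, zlib
key = sys.argv[1].encode()
crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)    # assumed 4-byte little-endian CRC32 trailer
print("NVMeTLSkey-1:%02d:%s:" % (int(sys.argv[2]), base64.b64encode(key + crc).decode()))
PY
)
keyctl add user :spdk-test:key0 "$psk" @s   # prints the serial that the checks below look up

Decoding the string actually logged above supports part of this: its base64 payload begins with the 32 ASCII bytes of key0 followed by a 4-byte trailer; whether that trailer is exactly CRC32 should be confirmed against nvmf/common.sh before reusing the sketch.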
00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:37:12.046 21:43:46 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:12.046 [2024-07-11 21:43:46.761152] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 23.11.0 initialization... 00:37:12.046 [2024-07-11 21:43:46.761222] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid1091708 ] 00:37:12.046 EAL: No free 2048 kB hugepages reported on node 1 00:37:12.303 [2024-07-11 21:43:46.820266] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:37:12.303 [2024-07-11 21:43:46.905834] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:37:12.303 21:43:46 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:37:12.303 21:43:46 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:37:12.303 21:43:46 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:37:12.303 21:43:46 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:37:12.561 21:43:47 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:37:12.561 21:43:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:37:12.820 21:43:47 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:12.820 21:43:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:37:13.078 [2024-07-11 21:43:47.766213] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:37:13.078 nvme0n1 00:37:13.357 21:43:47 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:37:13.357 21:43:47 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:37:13.357 21:43:47 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:13.357 21:43:47 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:13.357 21:43:47 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:13.357 21:43:47 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.357 21:43:48 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:37:13.357 21:43:48 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:13.357 21:43:48 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:37:13.357 21:43:48 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:37:13.357 21:43:48 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:37:13.357 21:43:48 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:13.357 21:43:48 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | 
select(.name == ":spdk-test:key0")' 00:37:13.643 21:43:48 keyring_linux -- keyring/linux.sh@25 -- # sn=285730261 00:37:13.643 21:43:48 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:37:13.643 21:43:48 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:13.643 21:43:48 keyring_linux -- keyring/linux.sh@26 -- # [[ 285730261 == \2\8\5\7\3\0\2\6\1 ]] 00:37:13.643 21:43:48 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 285730261 00:37:13.643 21:43:48 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:37:13.643 21:43:48 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:37:13.901 Running I/O for 1 seconds... 00:37:14.833 00:37:14.833 Latency(us) 00:37:14.833 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:14.833 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:37:14.833 nvme0n1 : 1.01 7667.50 29.95 0.00 0.00 16602.52 4805.97 24466.77 00:37:14.833 =================================================================================================================== 00:37:14.833 Total : 7667.50 29.95 0.00 0.00 16602.52 4805.97 24466.77 00:37:14.833 0 00:37:14.833 21:43:49 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:37:14.833 21:43:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:37:15.090 21:43:49 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:37:15.090 21:43:49 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:37:15.090 21:43:49 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:37:15.090 21:43:49 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:37:15.090 21:43:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:37:15.090 21:43:49 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:37:15.349 21:43:49 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:37:15.349 21:43:49 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:37:15.349 21:43:49 keyring_linux -- keyring/linux.sh@23 -- # return 00:37:15.349 21:43:49 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:15.349 21:43:49 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:37:15.349 21:43:49 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:15.349 21:43:49 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:37:15.349 21:43:49 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:37:15.349 21:43:49 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:37:15.349 21:43:49 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
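Before the deliberate failure case below completes, note what the check_keys/get_keysn steps just exercised: the app's JSON-RPC view of the keyring is compared against the kernel's own session keyring. A compressed sketch of that cross-check, assuming the same :spdk-test:key0 name and /var/tmp/bperf.sock socket as this run ($SPDK_ROOT is a placeholder for the full workspace path, not taken from the trace):

# Hedged sketch: verify the kernel session keyring and the bperf app agree on key0.
RPC="$SPDK_ROOT/scripts/rpc.py -s /var/tmp/bperf.sock"   # placeholder path; the log uses the full jenkins path
app_sn=$($RPC keyring_get_keys | jq -r '.[] | select(.name == ":spdk-test:key0") | .sn')
kernel_sn=$(keyctl search @s user :spdk-test:key0)       # serial printed when the key was added
[ "$app_sn" = "$kernel_sn" ] || exit 1                   # both sides must report the same serial
keyctl print "$kernel_sn" | grep -q '^NVMeTLSkey-1:00:' || exit 1   # payload is still the interchange PSK

The attach attempt that resumes below uses :spdk-test:key1 and is expected to fail; the JSON-RPC error response it produces appears a few lines further on.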
00:37:15.349 21:43:49 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:15.349 21:43:49 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:37:15.607 [2024-07-11 21:43:50.251580] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:37:15.607 [2024-07-11 21:43:50.251834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe52860 (107): Transport endpoint is not connected 00:37:15.607 [2024-07-11 21:43:50.252827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe52860 (9): Bad file descriptor 00:37:15.607 [2024-07-11 21:43:50.253826] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:37:15.607 [2024-07-11 21:43:50.253847] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:37:15.607 [2024-07-11 21:43:50.253861] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:37:15.607 request: 00:37:15.607 { 00:37:15.607 "name": "nvme0", 00:37:15.607 "trtype": "tcp", 00:37:15.607 "traddr": "127.0.0.1", 00:37:15.607 "adrfam": "ipv4", 00:37:15.607 "trsvcid": "4420", 00:37:15.607 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:37:15.607 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:37:15.607 "prchk_reftag": false, 00:37:15.607 "prchk_guard": false, 00:37:15.607 "hdgst": false, 00:37:15.607 "ddgst": false, 00:37:15.607 "psk": ":spdk-test:key1", 00:37:15.607 "method": "bdev_nvme_attach_controller", 00:37:15.607 "req_id": 1 00:37:15.607 } 00:37:15.607 Got JSON-RPC error response 00:37:15.607 response: 00:37:15.607 { 00:37:15.607 "code": -5, 00:37:15.607 "message": "Input/output error" 00:37:15.607 } 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@33 -- # sn=285730261 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 285730261 00:37:15.607 1 links removed 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:37:15.607 
21:43:50 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@33 -- # sn=652165507 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 652165507 00:37:15.607 1 links removed 00:37:15.607 21:43:50 keyring_linux -- keyring/linux.sh@41 -- # killprocess 1091708 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1091708 ']' 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1091708 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1091708 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1091708' 00:37:15.607 killing process with pid 1091708 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@967 -- # kill 1091708 00:37:15.607 Received shutdown signal, test time was about 1.000000 seconds 00:37:15.607 00:37:15.607 Latency(us) 00:37:15.607 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:37:15.607 =================================================================================================================== 00:37:15.607 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:37:15.607 21:43:50 keyring_linux -- common/autotest_common.sh@972 -- # wait 1091708 00:37:15.866 21:43:50 keyring_linux -- keyring/linux.sh@42 -- # killprocess 1091589 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 1091589 ']' 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 1091589 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 1091589 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 1091589' 00:37:15.866 killing process with pid 1091589 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@967 -- # kill 1091589 00:37:15.866 21:43:50 keyring_linux -- common/autotest_common.sh@972 -- # wait 1091589 00:37:16.432 00:37:16.433 real 0m4.933s 00:37:16.433 user 0m9.403s 00:37:16.433 sys 0m1.586s 00:37:16.433 21:43:50 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:37:16.433 21:43:50 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:37:16.433 ************************************ 00:37:16.433 END TEST keyring_linux 00:37:16.433 ************************************ 00:37:16.433 21:43:51 -- common/autotest_common.sh@1142 -- # return 0 00:37:16.433 21:43:51 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- 
spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:37:16.433 21:43:51 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:37:16.433 21:43:51 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:37:16.433 21:43:51 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:37:16.433 21:43:51 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:37:16.433 21:43:51 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:37:16.433 21:43:51 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:37:16.433 21:43:51 -- common/autotest_common.sh@722 -- # xtrace_disable 00:37:16.433 21:43:51 -- common/autotest_common.sh@10 -- # set +x 00:37:16.433 21:43:51 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:37:16.433 21:43:51 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:37:16.433 21:43:51 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:37:16.433 21:43:51 -- common/autotest_common.sh@10 -- # set +x 00:37:18.332 INFO: APP EXITING 00:37:18.333 INFO: killing all VMs 00:37:18.333 INFO: killing vhost app 00:37:18.333 INFO: EXIT DONE 00:37:19.267 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:37:19.267 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:37:19.267 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:37:19.267 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:37:19.267 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:37:19.267 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:37:19.267 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:37:19.267 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:37:19.267 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:37:19.267 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:37:19.267 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:37:19.267 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:37:19.267 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:37:19.267 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:37:19.267 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:37:19.267 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:37:19.267 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:37:20.641 Cleaning 00:37:20.641 Removing: /var/run/dpdk/spdk0/config 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:37:20.641 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:37:20.641 Removing: /var/run/dpdk/spdk0/hugepage_info 00:37:20.641 Removing: 
/var/run/dpdk/spdk1/config 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:37:20.641 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:37:20.641 Removing: /var/run/dpdk/spdk1/hugepage_info 00:37:20.641 Removing: /var/run/dpdk/spdk1/mp_socket 00:37:20.641 Removing: /var/run/dpdk/spdk2/config 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:37:20.641 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:37:20.641 Removing: /var/run/dpdk/spdk2/hugepage_info 00:37:20.641 Removing: /var/run/dpdk/spdk3/config 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:37:20.641 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:37:20.641 Removing: /var/run/dpdk/spdk3/hugepage_info 00:37:20.641 Removing: /var/run/dpdk/spdk4/config 00:37:20.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:37:20.641 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:37:20.642 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:37:20.642 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:37:20.642 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:37:20.642 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:37:20.642 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:37:20.642 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:37:20.642 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:37:20.642 Removing: /var/run/dpdk/spdk4/hugepage_info 00:37:20.642 Removing: /dev/shm/bdev_svc_trace.1 00:37:20.642 Removing: /dev/shm/nvmf_trace.0 00:37:20.642 Removing: /dev/shm/spdk_tgt_trace.pid771454 00:37:20.642 Removing: /var/run/dpdk/spdk0 00:37:20.642 Removing: /var/run/dpdk/spdk1 00:37:20.642 Removing: /var/run/dpdk/spdk2 00:37:20.642 Removing: /var/run/dpdk/spdk3 00:37:20.642 Removing: /var/run/dpdk/spdk4 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1000643 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1000645 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1003417 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1003552 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1003688 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1003990 
00:37:20.642 Removing: /var/run/dpdk/spdk_pid1004074 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1005149 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1006325 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1007509 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1008706 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1010518 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1011788 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1015468 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1015809 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1017153 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1017934 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1021527 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1023493 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1026791 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1030232 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1036441 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1040960 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1041017 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1053594 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1054003 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1054403 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1054894 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1055392 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1055916 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1056324 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1056735 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1059222 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1059371 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1063150 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1063320 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1064939 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1069845 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1069850 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1072738 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1074245 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1076146 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1076896 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1078294 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1079164 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1084349 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1084708 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1085100 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1086654 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1087051 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1087334 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1089765 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1089769 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1091209 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1091589 00:37:20.642 Removing: /var/run/dpdk/spdk_pid1091708 00:37:20.642 Removing: /var/run/dpdk/spdk_pid769288 00:37:20.642 Removing: /var/run/dpdk/spdk_pid770131 00:37:20.642 Removing: /var/run/dpdk/spdk_pid771454 00:37:20.642 Removing: /var/run/dpdk/spdk_pid771895 00:37:20.642 Removing: /var/run/dpdk/spdk_pid772588 00:37:20.642 Removing: /var/run/dpdk/spdk_pid772727 00:37:20.642 Removing: /var/run/dpdk/spdk_pid773435 00:37:20.642 Removing: /var/run/dpdk/spdk_pid773454 00:37:20.642 Removing: /var/run/dpdk/spdk_pid773698 00:37:20.642 Removing: /var/run/dpdk/spdk_pid775005 00:37:20.642 Removing: /var/run/dpdk/spdk_pid775921 00:37:20.642 Removing: /var/run/dpdk/spdk_pid776236 00:37:20.642 Removing: /var/run/dpdk/spdk_pid776423 00:37:20.642 Removing: /var/run/dpdk/spdk_pid776625 00:37:20.642 Removing: /var/run/dpdk/spdk_pid776813 00:37:20.642 Removing: /var/run/dpdk/spdk_pid776974 00:37:20.642 Removing: /var/run/dpdk/spdk_pid777128 00:37:20.642 
Removing: /var/run/dpdk/spdk_pid777310 00:37:20.642 Removing: /var/run/dpdk/spdk_pid777622 00:37:20.642 Removing: /var/run/dpdk/spdk_pid779970 00:37:20.642 Removing: /var/run/dpdk/spdk_pid780134 00:37:20.642 Removing: /var/run/dpdk/spdk_pid780296 00:37:20.642 Removing: /var/run/dpdk/spdk_pid780428 00:37:20.642 Removing: /var/run/dpdk/spdk_pid780728 00:37:20.642 Removing: /var/run/dpdk/spdk_pid780861 00:37:20.642 Removing: /var/run/dpdk/spdk_pid781164 00:37:20.642 Removing: /var/run/dpdk/spdk_pid781175 00:37:20.642 Removing: /var/run/dpdk/spdk_pid781467 00:37:20.642 Removing: /var/run/dpdk/spdk_pid781474 00:37:20.642 Removing: /var/run/dpdk/spdk_pid781642 00:37:20.642 Removing: /var/run/dpdk/spdk_pid781773 00:37:20.642 Removing: /var/run/dpdk/spdk_pid782136 00:37:20.642 Removing: /var/run/dpdk/spdk_pid782294 00:37:20.642 Removing: /var/run/dpdk/spdk_pid782502 00:37:20.642 Removing: /var/run/dpdk/spdk_pid782655 00:37:20.642 Removing: /var/run/dpdk/spdk_pid782801 00:37:20.642 Removing: /var/run/dpdk/spdk_pid782868 00:37:20.642 Removing: /var/run/dpdk/spdk_pid783130 00:37:20.642 Removing: /var/run/dpdk/spdk_pid783303 00:37:20.642 Removing: /var/run/dpdk/spdk_pid783455 00:37:20.642 Removing: /var/run/dpdk/spdk_pid783614 00:37:20.642 Removing: /var/run/dpdk/spdk_pid783882 00:37:20.642 Removing: /var/run/dpdk/spdk_pid784044 00:37:20.642 Removing: /var/run/dpdk/spdk_pid784200 00:37:20.903 Removing: /var/run/dpdk/spdk_pid784366 00:37:20.903 Removing: /var/run/dpdk/spdk_pid784629 00:37:20.903 Removing: /var/run/dpdk/spdk_pid784786 00:37:20.903 Removing: /var/run/dpdk/spdk_pid784945 00:37:20.903 Removing: /var/run/dpdk/spdk_pid785211 00:37:20.903 Removing: /var/run/dpdk/spdk_pid785378 00:37:20.903 Removing: /var/run/dpdk/spdk_pid785531 00:37:20.903 Removing: /var/run/dpdk/spdk_pid785684 00:37:20.903 Removing: /var/run/dpdk/spdk_pid785958 00:37:20.903 Removing: /var/run/dpdk/spdk_pid786123 00:37:20.903 Removing: /var/run/dpdk/spdk_pid786284 00:37:20.903 Removing: /var/run/dpdk/spdk_pid786500 00:37:20.903 Removing: /var/run/dpdk/spdk_pid786712 00:37:20.903 Removing: /var/run/dpdk/spdk_pid786781 00:37:20.903 Removing: /var/run/dpdk/spdk_pid786987 00:37:20.903 Removing: /var/run/dpdk/spdk_pid789159 00:37:20.903 Removing: /var/run/dpdk/spdk_pid843139 00:37:20.903 Removing: /var/run/dpdk/spdk_pid845739 00:37:20.903 Removing: /var/run/dpdk/spdk_pid852691 00:37:20.903 Removing: /var/run/dpdk/spdk_pid855857 00:37:20.903 Removing: /var/run/dpdk/spdk_pid858206 00:37:20.903 Removing: /var/run/dpdk/spdk_pid858711 00:37:20.903 Removing: /var/run/dpdk/spdk_pid863193 00:37:20.903 Removing: /var/run/dpdk/spdk_pid867028 00:37:20.903 Removing: /var/run/dpdk/spdk_pid867034 00:37:20.903 Removing: /var/run/dpdk/spdk_pid867695 00:37:20.903 Removing: /var/run/dpdk/spdk_pid868229 00:37:20.903 Removing: /var/run/dpdk/spdk_pid868884 00:37:20.903 Removing: /var/run/dpdk/spdk_pid869279 00:37:20.903 Removing: /var/run/dpdk/spdk_pid869291 00:37:20.903 Removing: /var/run/dpdk/spdk_pid869548 00:37:20.903 Removing: /var/run/dpdk/spdk_pid869561 00:37:20.903 Removing: /var/run/dpdk/spdk_pid869672 00:37:20.903 Removing: /var/run/dpdk/spdk_pid870226 00:37:20.903 Removing: /var/run/dpdk/spdk_pid870885 00:37:20.903 Removing: /var/run/dpdk/spdk_pid871533 00:37:20.903 Removing: /var/run/dpdk/spdk_pid871930 00:37:20.903 Removing: /var/run/dpdk/spdk_pid871947 00:37:20.903 Removing: /var/run/dpdk/spdk_pid872081 00:37:20.903 Removing: /var/run/dpdk/spdk_pid872964 00:37:20.904 Removing: /var/run/dpdk/spdk_pid873687 00:37:20.904 Removing: 
/var/run/dpdk/spdk_pid879026 00:37:20.904 Removing: /var/run/dpdk/spdk_pid879301 00:37:20.904 Removing: /var/run/dpdk/spdk_pid881796 00:37:20.904 Removing: /var/run/dpdk/spdk_pid885490 00:37:20.904 Removing: /var/run/dpdk/spdk_pid887536 00:37:20.904 Removing: /var/run/dpdk/spdk_pid894410 00:37:20.904 Removing: /var/run/dpdk/spdk_pid899597 00:37:20.904 Removing: /var/run/dpdk/spdk_pid900787 00:37:20.904 Removing: /var/run/dpdk/spdk_pid901454 00:37:20.904 Removing: /var/run/dpdk/spdk_pid911625 00:37:20.904 Removing: /var/run/dpdk/spdk_pid913720 00:37:20.904 Removing: /var/run/dpdk/spdk_pid938983 00:37:20.904 Removing: /var/run/dpdk/spdk_pid941764 00:37:20.904 Removing: /var/run/dpdk/spdk_pid942940 00:37:20.904 Removing: /var/run/dpdk/spdk_pid944178 00:37:20.904 Removing: /var/run/dpdk/spdk_pid944274 00:37:20.904 Removing: /var/run/dpdk/spdk_pid944413 00:37:20.904 Removing: /var/run/dpdk/spdk_pid944552 00:37:20.904 Removing: /var/run/dpdk/spdk_pid944865 00:37:20.904 Removing: /var/run/dpdk/spdk_pid946180 00:37:20.904 Removing: /var/run/dpdk/spdk_pid946802 00:37:20.904 Removing: /var/run/dpdk/spdk_pid947201 00:37:20.904 Removing: /var/run/dpdk/spdk_pid949431 00:37:20.904 Removing: /var/run/dpdk/spdk_pid949732 00:37:20.904 Removing: /var/run/dpdk/spdk_pid950296 00:37:20.904 Removing: /var/run/dpdk/spdk_pid952680 00:37:20.904 Removing: /var/run/dpdk/spdk_pid955934 00:37:20.904 Removing: /var/run/dpdk/spdk_pid959480 00:37:20.904 Removing: /var/run/dpdk/spdk_pid983128 00:37:20.904 Removing: /var/run/dpdk/spdk_pid985766 00:37:20.904 Removing: /var/run/dpdk/spdk_pid989642 00:37:20.904 Removing: /var/run/dpdk/spdk_pid990586 00:37:20.904 Removing: /var/run/dpdk/spdk_pid991661 00:37:20.904 Removing: /var/run/dpdk/spdk_pid994092 00:37:20.904 Removing: /var/run/dpdk/spdk_pid996443 00:37:20.904 Clean 00:37:20.904 21:43:55 -- common/autotest_common.sh@1451 -- # return 0 00:37:20.904 21:43:55 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:37:20.904 21:43:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:20.904 21:43:55 -- common/autotest_common.sh@10 -- # set +x 00:37:21.162 21:43:55 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:37:21.162 21:43:55 -- common/autotest_common.sh@728 -- # xtrace_disable 00:37:21.162 21:43:55 -- common/autotest_common.sh@10 -- # set +x 00:37:21.162 21:43:55 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:37:21.162 21:43:55 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:37:21.162 21:43:55 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:37:21.162 21:43:55 -- spdk/autotest.sh@391 -- # hash lcov 00:37:21.162 21:43:55 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:37:21.162 21:43:55 -- spdk/autotest.sh@393 -- # hostname 00:37:21.162 21:43:55 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:37:21.162 geninfo: WARNING: invalid characters removed from testname! 
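The capture step above (autotest.sh@393) and the calls that follow implement a standard lcov capture-merge-filter pass over the gcov counters. A shortened sketch of the same sequence, with the workspace path replaced by a $SPDK placeholder and the long --rc option list trimmed; the exact flags appear verbatim in the trace below:

# Hedged sketch of the coverage pass in autotest.sh@393-399.
SPDK=/path/to/spdk                                 # the run uses the full jenkins workspace path
LCOV="lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --no-external -q"
$LCOV -c -d "$SPDK" -t "$(hostname)" -o cov_test.info        # capture counters produced by the tests
$LCOV -a cov_base.info -a cov_test.info -o cov_total.info    # merge pre-test baseline with the test capture
$LCOV -r cov_total.info '*/dpdk/*' -o cov_total.info         # then strip bundled DPDK, system headers,
$LCOV -r cov_total.info '/usr/*'   -o cov_total.info         # and (in the real run) example apps from the report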
00:37:53.280 21:44:23 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:53.280 21:44:27 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:55.805 21:44:30 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:37:59.101 21:44:33 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:01.629 21:44:36 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:04.909 21:44:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:38:07.460 21:44:41 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:38:07.460 21:44:41 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:07.460 21:44:41 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:38:07.460 21:44:41 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:07.460 21:44:41 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:07.460 21:44:41 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.461 21:44:41 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.461 21:44:41 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.461 21:44:41 -- paths/export.sh@5 -- $ export PATH 00:38:07.461 21:44:41 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:07.461 21:44:41 -- common/autobuild_common.sh@443 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:38:07.461 21:44:41 -- common/autobuild_common.sh@444 -- $ date +%s 00:38:07.461 21:44:41 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720727081.XXXXXX 00:38:07.461 21:44:41 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720727081.Ts0pDM 00:38:07.461 21:44:41 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:38:07.461 21:44:41 -- common/autobuild_common.sh@450 -- $ '[' -n v23.11 ']' 00:38:07.461 21:44:41 -- common/autobuild_common.sh@451 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:38:07.461 21:44:41 -- common/autobuild_common.sh@451 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:38:07.461 21:44:41 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:38:07.461 21:44:41 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:38:07.461 21:44:41 -- common/autobuild_common.sh@460 -- $ get_config_params 00:38:07.461 21:44:41 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:38:07.461 21:44:41 -- common/autotest_common.sh@10 -- $ set +x 00:38:07.461 21:44:41 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:38:07.461 21:44:41 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:38:07.461 21:44:41 -- pm/common@17 -- $ local monitor 00:38:07.461 21:44:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:07.461 21:44:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:07.461 21:44:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:07.461 
21:44:41 -- pm/common@21 -- $ date +%s 00:38:07.461 21:44:41 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:07.461 21:44:41 -- pm/common@21 -- $ date +%s 00:38:07.461 21:44:41 -- pm/common@25 -- $ sleep 1 00:38:07.461 21:44:41 -- pm/common@21 -- $ date +%s 00:38:07.461 21:44:41 -- pm/common@21 -- $ date +%s 00:38:07.461 21:44:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720727081 00:38:07.461 21:44:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720727081 00:38:07.461 21:44:41 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720727081 00:38:07.461 21:44:41 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1720727081 00:38:07.461 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720727081_collect-vmstat.pm.log 00:38:07.461 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720727081_collect-cpu-load.pm.log 00:38:07.461 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720727081_collect-cpu-temp.pm.log 00:38:07.461 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1720727081_collect-bmc-pm.bmc.pm.log 00:38:08.395 21:44:42 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:38:08.395 21:44:42 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48 00:38:08.395 21:44:42 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:08.395 21:44:42 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:38:08.395 21:44:42 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:38:08.395 21:44:42 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:38:08.395 21:44:42 -- spdk/autopackage.sh@19 -- $ timing_finish 00:38:08.395 21:44:42 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:38:08.395 21:44:42 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:38:08.395 21:44:42 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:38:08.395 21:44:42 -- spdk/autopackage.sh@20 -- $ exit 0 00:38:08.395 21:44:42 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:38:08.395 21:44:42 -- pm/common@29 -- $ signal_monitor_resources TERM 00:38:08.395 21:44:42 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:38:08.395 21:44:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:08.395 21:44:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:38:08.395 21:44:42 -- pm/common@44 -- $ pid=1102858 00:38:08.395 21:44:42 -- pm/common@50 -- $ kill -TERM 1102858 00:38:08.395 21:44:42 -- pm/common@42 -- $ for monitor in 
"${MONITOR_RESOURCES[@]}" 00:38:08.395 21:44:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:38:08.395 21:44:42 -- pm/common@44 -- $ pid=1102860 00:38:08.395 21:44:42 -- pm/common@50 -- $ kill -TERM 1102860 00:38:08.395 21:44:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:08.395 21:44:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:38:08.395 21:44:42 -- pm/common@44 -- $ pid=1102862 00:38:08.395 21:44:42 -- pm/common@50 -- $ kill -TERM 1102862 00:38:08.395 21:44:42 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:38:08.395 21:44:42 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:38:08.395 21:44:42 -- pm/common@44 -- $ pid=1102890 00:38:08.395 21:44:42 -- pm/common@50 -- $ sudo -E kill -TERM 1102890 00:38:08.395 + [[ -n 664009 ]] 00:38:08.395 + sudo kill 664009 00:38:08.411 [Pipeline] } 00:38:08.435 [Pipeline] // stage 00:38:08.438 [Pipeline] } 00:38:08.449 [Pipeline] // timeout 00:38:08.453 [Pipeline] } 00:38:08.465 [Pipeline] // catchError 00:38:08.468 [Pipeline] } 00:38:08.478 [Pipeline] // wrap 00:38:08.481 [Pipeline] } 00:38:08.491 [Pipeline] // catchError 00:38:08.496 [Pipeline] stage 00:38:08.498 [Pipeline] { (Epilogue) 00:38:08.507 [Pipeline] catchError 00:38:08.508 [Pipeline] { 00:38:08.518 [Pipeline] echo 00:38:08.519 Cleanup processes 00:38:08.522 [Pipeline] sh 00:38:08.803 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:08.803 1103009 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:38:08.803 1103124 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:08.818 [Pipeline] sh 00:38:09.102 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:38:09.103 ++ grep -v 'sudo pgrep' 00:38:09.103 ++ awk '{print $1}' 00:38:09.103 + sudo kill -9 1103009 00:38:09.117 [Pipeline] sh 00:38:09.405 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:38:21.635 [Pipeline] sh 00:38:21.918 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:38:21.919 Artifacts sizes are good 00:38:21.935 [Pipeline] archiveArtifacts 00:38:21.944 Archiving artifacts 00:38:22.163 [Pipeline] sh 00:38:22.445 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:38:22.458 [Pipeline] cleanWs 00:38:22.468 [WS-CLEANUP] Deleting project workspace... 00:38:22.468 [WS-CLEANUP] Deferred wipeout is used... 00:38:22.475 [WS-CLEANUP] done 00:38:22.477 [Pipeline] } 00:38:22.497 [Pipeline] // catchError 00:38:22.509 [Pipeline] sh 00:38:22.790 + logger -p user.info -t JENKINS-CI 00:38:22.798 [Pipeline] } 00:38:22.813 [Pipeline] // stage 00:38:22.818 [Pipeline] } 00:38:22.833 [Pipeline] // node 00:38:22.838 [Pipeline] End of Pipeline 00:38:22.861 Finished: SUCCESS